Message ID | 20240129193512.123145-3-lokeshgidra@google.com |
---|---|
State | New |
Subject | [PATCH v2 2/3] userfaultfd: protect mmap_changing with rw_sem in userfaulfd_ctx |
From | Lokesh Gidra <lokeshgidra@google.com> |
To | akpm@linux-foundation.org |
Cc | lokeshgidra@google.com, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, selinux@vger.kernel.org, surenb@google.com, kernel-team@android.com, aarcange@redhat.com, peterx@redhat.com, david@redhat.com, axelrasmussen@google.com, bgeffon@google.com, willy@infradead.org, jannh@google.com, kaleshsingh@google.com, ngeoffray@google.com, timmurray@google.com, rppt@kernel.org |
Date | Mon, 29 Jan 2024 11:35:11 -0800 |
Series | per-vma locks in userfaultfd |
Commit Message
Lokesh Gidra
Jan. 29, 2024, 7:35 p.m. UTC
Increments and loads to mmap_changing are always in mmap_lock
critical section. This ensures that if userspace requests event
notification for non-cooperative operations (e.g. mremap), userfaultfd
operations don't occur concurrently.
This can be achieved by using a separate read-write semaphore in
userfaultfd_ctx such that increments are done in write-mode and loads
in read-mode, thereby eliminating the dependency on mmap_lock for this
purpose.
This is a preparatory step before we replace mmap_lock usage with
per-vma locks in fill/move ioctls.
Signed-off-by: Lokesh Gidra <lokeshgidra@google.com>
---
fs/userfaultfd.c | 40 ++++++++++++----------
include/linux/userfaultfd_k.h | 31 ++++++++++--------
mm/userfaultfd.c | 62 ++++++++++++++++++++---------------
3 files changed, 75 insertions(+), 58 deletions(-)
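
The change boils down to one pattern, repeated at each site in the diff: non-cooperative event handlers take map_changing_lock in write mode around the mmap_changing increment, while fill/move/wp operations take it in read mode around the mmap_changing check. A minimal user-space analogue of that pattern, for illustration only (pthreads and C11 atomics stand in for the kernel's rw_semaphore and atomic_t; the struct, function names, and the noncoop_event_end() helper are hypothetical, not part of the patch):

```c
#include <errno.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

/* Hypothetical stand-in for the relevant part of userfaultfd_ctx. */
struct uffd_ctx_analogue {
	pthread_rwlock_t map_changing_lock;
	atomic_int       mmap_changing;
};

/* Non-cooperative event (fork/mremap/remove/unmap notification):
 * increment the counter in write mode so the increment cannot overlap
 * an in-flight userfaultfd operation. */
void noncoop_event_begin(struct uffd_ctx_analogue *ctx)
{
	pthread_rwlock_wrlock(&ctx->map_changing_lock);
	atomic_fetch_add(&ctx->mmap_changing, 1);
	pthread_rwlock_unlock(&ctx->map_changing_lock);
}

/* Hypothetical helper: in the kernel the decrement happens once the
 * event has been delivered to userspace. */
void noncoop_event_end(struct uffd_ctx_analogue *ctx)
{
	atomic_fetch_sub(&ctx->mmap_changing, 1);
}

/* userfaultfd operation (copy/zeropage/continue/poison/wp/move):
 * read the counter under the lock in read mode and bail out with
 * EAGAIN if an event is pending, mirroring the "re-check after taking
 * map_changing_lock" logic in the patch. */
int uffd_op(struct uffd_ctx_analogue *ctx)
{
	int ret;

	pthread_rwlock_rdlock(&ctx->map_changing_lock);
	if (atomic_load(&ctx->mmap_changing))
		ret = -EAGAIN;		/* caller must retry later */
	else
		ret = 0;		/* ... do the actual work here ... */
	pthread_rwlock_unlock(&ctx->map_changing_lock);
	return ret;
}

int main(void)
{
	struct uffd_ctx_analogue ctx = {
		.map_changing_lock = PTHREAD_RWLOCK_INITIALIZER,
		.mmap_changing = 0,
	};

	noncoop_event_begin(&ctx);
	printf("op during event: %d\n", uffd_op(&ctx));	/* -EAGAIN */
	noncoop_event_end(&ctx);
	printf("op after event:  %d\n", uffd_op(&ctx));	/* 0 */
	return 0;
}
```

Because the increment is done while holding the lock in write mode, any operation that already holds it in read mode finishes before the counter changes, and any operation that starts afterwards sees the counter set and backs off.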
Comments
* Lokesh Gidra <lokeshgidra@google.com> [240129 14:35]: > Increments and loads to mmap_changing are always in mmap_lock > critical section. Read or write? > This ensures that if userspace requests event > notification for non-cooperative operations (e.g. mremap), userfaultfd > operations don't occur concurrently. > > This can be achieved by using a separate read-write semaphore in > userfaultfd_ctx such that increments are done in write-mode and loads > in read-mode, thereby eliminating the dependency on mmap_lock for this > purpose. > > This is a preparatory step before we replace mmap_lock usage with > per-vma locks in fill/move ioctls. > > Signed-off-by: Lokesh Gidra <lokeshgidra@google.com> > --- > fs/userfaultfd.c | 40 ++++++++++++---------- > include/linux/userfaultfd_k.h | 31 ++++++++++-------- > mm/userfaultfd.c | 62 ++++++++++++++++++++--------------- > 3 files changed, 75 insertions(+), 58 deletions(-) > > diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c > index 58331b83d648..c00a021bcce4 100644 > --- a/fs/userfaultfd.c > +++ b/fs/userfaultfd.c > @@ -685,12 +685,15 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs) > ctx->flags = octx->flags; > ctx->features = octx->features; > ctx->released = false; > + init_rwsem(&ctx->map_changing_lock); > atomic_set(&ctx->mmap_changing, 0); > ctx->mm = vma->vm_mm; > mmgrab(ctx->mm); > > userfaultfd_ctx_get(octx); > + down_write(&octx->map_changing_lock); > atomic_inc(&octx->mmap_changing); > + up_write(&octx->map_changing_lock); This can potentially hold up your writer as the readers execute. I think this will change your priority (ie: priority inversion)? You could use the first bit of the atomic_inc as indication of a write. So if the mmap_changing is even, then there are no writers. If it didn't change and it's even then you know no modification has happened (or it overflowed and hit the same number which would be rare, but maybe okay?). 
> fctx->orig = octx; > fctx->new = ctx; > list_add_tail(&fctx->list, fcs); > @@ -737,7 +740,9 @@ void mremap_userfaultfd_prep(struct vm_area_struct *vma, > if (ctx->features & UFFD_FEATURE_EVENT_REMAP) { > vm_ctx->ctx = ctx; > userfaultfd_ctx_get(ctx); > + down_write(&ctx->map_changing_lock); > atomic_inc(&ctx->mmap_changing); > + up_write(&ctx->map_changing_lock); > } else { > /* Drop uffd context if remap feature not enabled */ > vma_start_write(vma); > @@ -783,7 +788,9 @@ bool userfaultfd_remove(struct vm_area_struct *vma, > return true; > > userfaultfd_ctx_get(ctx); > + down_write(&ctx->map_changing_lock); > atomic_inc(&ctx->mmap_changing); > + up_write(&ctx->map_changing_lock); > mmap_read_unlock(mm); > > msg_init(&ewq.msg); > @@ -825,7 +832,9 @@ int userfaultfd_unmap_prep(struct vm_area_struct *vma, unsigned long start, > return -ENOMEM; > > userfaultfd_ctx_get(ctx); > + down_write(&ctx->map_changing_lock); > atomic_inc(&ctx->mmap_changing); > + up_write(&ctx->map_changing_lock); > unmap_ctx->ctx = ctx; > unmap_ctx->start = start; > unmap_ctx->end = end; > @@ -1709,9 +1718,8 @@ static int userfaultfd_copy(struct userfaultfd_ctx *ctx, > if (uffdio_copy.mode & UFFDIO_COPY_MODE_WP) > flags |= MFILL_ATOMIC_WP; > if (mmget_not_zero(ctx->mm)) { > - ret = mfill_atomic_copy(ctx->mm, uffdio_copy.dst, uffdio_copy.src, > - uffdio_copy.len, &ctx->mmap_changing, > - flags); > + ret = mfill_atomic_copy(ctx, uffdio_copy.dst, uffdio_copy.src, > + uffdio_copy.len, flags); > mmput(ctx->mm); > } else { > return -ESRCH; > @@ -1761,9 +1769,8 @@ static int userfaultfd_zeropage(struct userfaultfd_ctx *ctx, > goto out; > > if (mmget_not_zero(ctx->mm)) { > - ret = mfill_atomic_zeropage(ctx->mm, uffdio_zeropage.range.start, > - uffdio_zeropage.range.len, > - &ctx->mmap_changing); > + ret = mfill_atomic_zeropage(ctx, uffdio_zeropage.range.start, > + uffdio_zeropage.range.len); > mmput(ctx->mm); > } else { > return -ESRCH; > @@ -1818,9 +1825,8 @@ static int userfaultfd_writeprotect(struct userfaultfd_ctx *ctx, > return -EINVAL; > > if (mmget_not_zero(ctx->mm)) { > - ret = mwriteprotect_range(ctx->mm, uffdio_wp.range.start, > - uffdio_wp.range.len, mode_wp, > - &ctx->mmap_changing); > + ret = mwriteprotect_range(ctx, uffdio_wp.range.start, > + uffdio_wp.range.len, mode_wp); > mmput(ctx->mm); > } else { > return -ESRCH; > @@ -1870,9 +1876,8 @@ static int userfaultfd_continue(struct userfaultfd_ctx *ctx, unsigned long arg) > flags |= MFILL_ATOMIC_WP; > > if (mmget_not_zero(ctx->mm)) { > - ret = mfill_atomic_continue(ctx->mm, uffdio_continue.range.start, > - uffdio_continue.range.len, > - &ctx->mmap_changing, flags); > + ret = mfill_atomic_continue(ctx, uffdio_continue.range.start, > + uffdio_continue.range.len, flags); > mmput(ctx->mm); > } else { > return -ESRCH; > @@ -1925,9 +1930,8 @@ static inline int userfaultfd_poison(struct userfaultfd_ctx *ctx, unsigned long > goto out; > > if (mmget_not_zero(ctx->mm)) { > - ret = mfill_atomic_poison(ctx->mm, uffdio_poison.range.start, > - uffdio_poison.range.len, > - &ctx->mmap_changing, 0); > + ret = mfill_atomic_poison(ctx, uffdio_poison.range.start, > + uffdio_poison.range.len, 0); > mmput(ctx->mm); > } else { > return -ESRCH; > @@ -2003,13 +2007,14 @@ static int userfaultfd_move(struct userfaultfd_ctx *ctx, > if (mmget_not_zero(mm)) { > mmap_read_lock(mm); > > - /* Re-check after taking mmap_lock */ > + /* Re-check after taking map_changing_lock */ > + down_read(&ctx->map_changing_lock); > if (likely(!atomic_read(&ctx->mmap_changing))) > ret = move_pages(ctx, mm, 
uffdio_move.dst, uffdio_move.src, > uffdio_move.len, uffdio_move.mode); > else > ret = -EAGAIN; > - > + up_read(&ctx->map_changing_lock); > mmap_read_unlock(mm); > mmput(mm); > } else { > @@ -2216,6 +2221,7 @@ static int new_userfaultfd(int flags) > ctx->flags = flags; > ctx->features = 0; > ctx->released = false; > + init_rwsem(&ctx->map_changing_lock); > atomic_set(&ctx->mmap_changing, 0); > ctx->mm = current->mm; > /* prevent the mm struct to be freed */ > diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h > index 691d928ee864..3210c3552976 100644 > --- a/include/linux/userfaultfd_k.h > +++ b/include/linux/userfaultfd_k.h > @@ -69,6 +69,13 @@ struct userfaultfd_ctx { > unsigned int features; > /* released */ > bool released; > + /* > + * Prevents userfaultfd operations (fill/move/wp) from happening while > + * some non-cooperative event(s) is taking place. Increments are done > + * in write-mode. Whereas, userfaultfd operations, which includes > + * reading mmap_changing, is done under read-mode. > + */ > + struct rw_semaphore map_changing_lock; > /* memory mappings are changing because of non-cooperative event */ > atomic_t mmap_changing; > /* mm with one ore more vmas attached to this userfaultfd_ctx */ > @@ -113,22 +120,18 @@ extern int mfill_atomic_install_pte(pmd_t *dst_pmd, > unsigned long dst_addr, struct page *page, > bool newly_allocated, uffd_flags_t flags); > > -extern ssize_t mfill_atomic_copy(struct mm_struct *dst_mm, unsigned long dst_start, > +extern ssize_t mfill_atomic_copy(struct userfaultfd_ctx *ctx, unsigned long dst_start, > unsigned long src_start, unsigned long len, > - atomic_t *mmap_changing, uffd_flags_t flags); > -extern ssize_t mfill_atomic_zeropage(struct mm_struct *dst_mm, > + uffd_flags_t flags); > +extern ssize_t mfill_atomic_zeropage(struct userfaultfd_ctx *ctx, > unsigned long dst_start, > - unsigned long len, > - atomic_t *mmap_changing); > -extern ssize_t mfill_atomic_continue(struct mm_struct *dst_mm, unsigned long dst_start, > - unsigned long len, atomic_t *mmap_changing, > - uffd_flags_t flags); > -extern ssize_t mfill_atomic_poison(struct mm_struct *dst_mm, unsigned long start, > - unsigned long len, atomic_t *mmap_changing, > - uffd_flags_t flags); > -extern int mwriteprotect_range(struct mm_struct *dst_mm, > - unsigned long start, unsigned long len, > - bool enable_wp, atomic_t *mmap_changing); > + unsigned long len); > +extern ssize_t mfill_atomic_continue(struct userfaultfd_ctx *ctx, unsigned long dst_start, > + unsigned long len, uffd_flags_t flags); > +extern ssize_t mfill_atomic_poison(struct userfaultfd_ctx *ctx, unsigned long start, > + unsigned long len, uffd_flags_t flags); > +extern int mwriteprotect_range(struct userfaultfd_ctx *ctx, unsigned long start, > + unsigned long len, bool enable_wp); > extern long uffd_wp_range(struct vm_area_struct *vma, > unsigned long start, unsigned long len, bool enable_wp); > > diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c > index e3a91871462a..6e2ca04ab04d 100644 > --- a/mm/userfaultfd.c > +++ b/mm/userfaultfd.c > @@ -353,11 +353,11 @@ static pmd_t *mm_alloc_pmd(struct mm_struct *mm, unsigned long address) > * called with mmap_lock held, it will release mmap_lock before returning. 
> */ > static __always_inline ssize_t mfill_atomic_hugetlb( > + struct userfaultfd_ctx *ctx, > struct vm_area_struct *dst_vma, > unsigned long dst_start, > unsigned long src_start, > unsigned long len, > - atomic_t *mmap_changing, > uffd_flags_t flags) > { > struct mm_struct *dst_mm = dst_vma->vm_mm; > @@ -379,6 +379,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb( > * feature is not supported. > */ > if (uffd_flags_mode_is(flags, MFILL_ATOMIC_ZEROPAGE)) { > + up_read(&ctx->map_changing_lock); > mmap_read_unlock(dst_mm); > return -EINVAL; > } > @@ -463,6 +464,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb( > cond_resched(); > > if (unlikely(err == -ENOENT)) { > + up_read(&ctx->map_changing_lock); > mmap_read_unlock(dst_mm); > BUG_ON(!folio); > > @@ -473,12 +475,13 @@ static __always_inline ssize_t mfill_atomic_hugetlb( > goto out; > } > mmap_read_lock(dst_mm); > + down_read(&ctx->map_changing_lock); > /* > * If memory mappings are changing because of non-cooperative > * operation (e.g. mremap) running in parallel, bail out and > * request the user to retry later > */ > - if (mmap_changing && atomic_read(mmap_changing)) { > + if (atomic_read(ctx->mmap_changing)) { > err = -EAGAIN; > break; > } > @@ -501,6 +504,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb( > } > > out_unlock: > + up_read(&ctx->map_changing_lock); > mmap_read_unlock(dst_mm); > out: > if (folio) > @@ -512,11 +516,11 @@ static __always_inline ssize_t mfill_atomic_hugetlb( > } > #else /* !CONFIG_HUGETLB_PAGE */ > /* fail at build time if gcc attempts to use this */ > -extern ssize_t mfill_atomic_hugetlb(struct vm_area_struct *dst_vma, > +extern ssize_t mfill_atomic_hugetlb(struct userfaultfd_ctx *ctx, > + struct vm_area_struct *dst_vma, > unsigned long dst_start, > unsigned long src_start, > unsigned long len, > - atomic_t *mmap_changing, > uffd_flags_t flags); > #endif /* CONFIG_HUGETLB_PAGE */ > > @@ -564,13 +568,13 @@ static __always_inline ssize_t mfill_atomic_pte(pmd_t *dst_pmd, > return err; > } > > -static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm, > +static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx, > unsigned long dst_start, > unsigned long src_start, > unsigned long len, > - atomic_t *mmap_changing, > uffd_flags_t flags) > { > + struct mm_struct *dst_mm = ctx->mm; > struct vm_area_struct *dst_vma; > ssize_t err; > pmd_t *dst_pmd; > @@ -600,8 +604,9 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm, > * operation (e.g. 
mremap) running in parallel, bail out and > * request the user to retry later > */ > + down_read(&ctx->map_changing_lock); > err = -EAGAIN; > - if (mmap_changing && atomic_read(mmap_changing)) > + if (atomic_read(&ctx->mmap_changing)) > goto out_unlock; > > /* > @@ -633,8 +638,8 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm, > * If this is a HUGETLB vma, pass off to appropriate routine > */ > if (is_vm_hugetlb_page(dst_vma)) > - return mfill_atomic_hugetlb(dst_vma, dst_start, src_start, > - len, mmap_changing, flags); > + return mfill_atomic_hugetlb(ctx, dst_vma, dst_start, > + src_start, len, flags); > > if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma)) > goto out_unlock; > @@ -693,6 +698,7 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm, > if (unlikely(err == -ENOENT)) { > void *kaddr; > > + up_read(&ctx->map_changing_lock); > mmap_read_unlock(dst_mm); > BUG_ON(!folio); > > @@ -723,6 +729,7 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm, > } > > out_unlock: > + up_read(&ctx->map_changing_lock); > mmap_read_unlock(dst_mm); > out: > if (folio) > @@ -733,34 +740,33 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm, > return copied ? copied : err; > } > > -ssize_t mfill_atomic_copy(struct mm_struct *dst_mm, unsigned long dst_start, > +ssize_t mfill_atomic_copy(struct userfaultfd_ctx *ctx, unsigned long dst_start, > unsigned long src_start, unsigned long len, > - atomic_t *mmap_changing, uffd_flags_t flags) > + uffd_flags_t flags) > { > - return mfill_atomic(dst_mm, dst_start, src_start, len, mmap_changing, > + return mfill_atomic(ctx, dst_start, src_start, len, > uffd_flags_set_mode(flags, MFILL_ATOMIC_COPY)); > } > > -ssize_t mfill_atomic_zeropage(struct mm_struct *dst_mm, unsigned long start, > - unsigned long len, atomic_t *mmap_changing) > +ssize_t mfill_atomic_zeropage(struct userfaultfd_ctx *ctx, > + unsigned long start, > + unsigned long len) > { > - return mfill_atomic(dst_mm, start, 0, len, mmap_changing, > + return mfill_atomic(ctx, start, 0, len, > uffd_flags_set_mode(0, MFILL_ATOMIC_ZEROPAGE)); > } > > -ssize_t mfill_atomic_continue(struct mm_struct *dst_mm, unsigned long start, > - unsigned long len, atomic_t *mmap_changing, > - uffd_flags_t flags) > +ssize_t mfill_atomic_continue(struct userfaultfd_ctx *ctx, unsigned long start, > + unsigned long len, uffd_flags_t flags) > { > - return mfill_atomic(dst_mm, start, 0, len, mmap_changing, > + return mfill_atomic(ctx, start, 0, len, > uffd_flags_set_mode(flags, MFILL_ATOMIC_CONTINUE)); > } > > -ssize_t mfill_atomic_poison(struct mm_struct *dst_mm, unsigned long start, > - unsigned long len, atomic_t *mmap_changing, > - uffd_flags_t flags) > +ssize_t mfill_atomic_poison(struct userfaultfd_ctx *ctx, unsigned long start, > + unsigned long len, uffd_flags_t flags) > { > - return mfill_atomic(dst_mm, start, 0, len, mmap_changing, > + return mfill_atomic(ctx, start, 0, len, > uffd_flags_set_mode(flags, MFILL_ATOMIC_POISON)); > } > > @@ -793,10 +799,10 @@ long uffd_wp_range(struct vm_area_struct *dst_vma, > return ret; > } > > -int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start, > - unsigned long len, bool enable_wp, > - atomic_t *mmap_changing) > +int mwriteprotect_range(struct userfaultfd_ctx *ctx, unsigned long start, > + unsigned long len, bool enable_wp) > { > + struct mm_struct *dst_mm = ctx->mm; > unsigned long end = start + len; > unsigned long _start, _end; > struct vm_area_struct *dst_vma; > @@ -820,8 
+826,9 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start, > * operation (e.g. mremap) running in parallel, bail out and > * request the user to retry later > */ > + down_read(&ctx->map_changing_lock); > err = -EAGAIN; > - if (mmap_changing && atomic_read(mmap_changing)) > + if (atomic_read(&ctx->mmap_changing)) > goto out_unlock; > > err = -ENOENT; > @@ -850,6 +857,7 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start, > err = 0; > } > out_unlock: > + up_read(&ctx->map_changing_lock); > mmap_read_unlock(dst_mm); > return err; > } > -- > 2.43.0.429.g432eaa2c6b-goog > >
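
The suggestion above (use the low bit of the counter as a writer indication) amounts to a sequence-count, seqlock-style check: the counter is made odd while a change is in progress and even again afterwards, so a reader can detect an overlapping writer by sampling the counter before and after its work. A small self-contained sketch of that alternative in C11 atomics (hypothetical names; this is the idea being discussed, not what the patch implements):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Writer side: the counter is odd while mappings are changing and even
 * again once the change is complete. */
static atomic_uint mmap_seq;

static void change_begin(void) { atomic_fetch_add(&mmap_seq, 1); }	/* -> odd */
static void change_end(void)   { atomic_fetch_add(&mmap_seq, 1); }	/* -> even */

/* Reader side: sample the counter, do the work, then confirm the
 * counter is still the same even value; otherwise a writer overlapped
 * and the work must be retried (or fail with EAGAIN). */
static bool read_section(void (*work)(void))
{
	unsigned int seq = atomic_load(&mmap_seq);

	if (seq & 1)			/* a change is in progress */
		return false;
	work();
	/* Unchanged and even => no modification overlapped the work
	 * (modulo the rare wrap-around mentioned above). */
	return atomic_load(&mmap_seq) == seq;
}

static void dummy_work(void) { }

int main(void)
{
	printf("before change: %d\n", read_section(dummy_work));	/* 1 */
	change_begin();
	printf("during change: %d\n", read_section(dummy_work));	/* 0 */
	change_end();
	printf("after change:  %d\n", read_section(dummy_work));	/* 1 */
	return 0;
}
```

The trade-off, as discussed below, is that a pure sequence check lets readers and writers overlap and only detects the race afterwards, whereas the patch wants in-flight userfaultfd operations to block a pending non-cooperative event until they are done.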
On Mon, Jan 29, 2024 at 1:00 PM Liam R. Howlett <Liam.Howlett@oracle.com> wrote: > > * Lokesh Gidra <lokeshgidra@google.com> [240129 14:35]: > > Increments and loads to mmap_changing are always in mmap_lock > > critical section. > > Read or write? > It's write-mode when incrementing (except in case of userfaultfd_remove() where it's done in read-mode) and loads are in mmap_lock (read-mode). I'll clarify this in the next version. > > > This ensures that if userspace requests event > > notification for non-cooperative operations (e.g. mremap), userfaultfd > > operations don't occur concurrently. > > > > This can be achieved by using a separate read-write semaphore in > > userfaultfd_ctx such that increments are done in write-mode and loads > > in read-mode, thereby eliminating the dependency on mmap_lock for this > > purpose. > > > > This is a preparatory step before we replace mmap_lock usage with > > per-vma locks in fill/move ioctls. > > > > Signed-off-by: Lokesh Gidra <lokeshgidra@google.com> > > --- > > fs/userfaultfd.c | 40 ++++++++++++---------- > > include/linux/userfaultfd_k.h | 31 ++++++++++-------- > > mm/userfaultfd.c | 62 ++++++++++++++++++++--------------- > > 3 files changed, 75 insertions(+), 58 deletions(-) > > > > diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c > > index 58331b83d648..c00a021bcce4 100644 > > --- a/fs/userfaultfd.c > > +++ b/fs/userfaultfd.c > > @@ -685,12 +685,15 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs) > > ctx->flags = octx->flags; > > ctx->features = octx->features; > > ctx->released = false; > > + init_rwsem(&ctx->map_changing_lock); > > atomic_set(&ctx->mmap_changing, 0); > > ctx->mm = vma->vm_mm; > > mmgrab(ctx->mm); > > > > userfaultfd_ctx_get(octx); > > + down_write(&octx->map_changing_lock); > > atomic_inc(&octx->mmap_changing); > > + up_write(&octx->map_changing_lock); > > This can potentially hold up your writer as the readers execute. I > think this will change your priority (ie: priority inversion)? Priority inversion, if any, is already happening due to mmap_lock, no? Also, I thought rw_semaphore implementation is fair, so the writer will eventually get the lock right? Please correct me if I'm wrong. At this patch: there can't be any readers as they need to acquire mmap_lock in read-mode first. While writers, at the point of incrementing mmap_changing, already hold mmap_lock in write-mode. With per-vma locks, the same synchronization that mmap_lock achieved around mmap_changing, will be achieved by ctx->map_changing_lock. > > You could use the first bit of the atomic_inc as indication of a write. > So if the mmap_changing is even, then there are no writers. If it > didn't change and it's even then you know no modification has happened > (or it overflowed and hit the same number which would be rare, but > maybe okay?). This is already achievable, right? If mmap_changing is >0 then we know there are writers. The problem is that we want writers (like mremap operations) to block as long as there is a userfaultfd operation (also reader of mmap_changing) going on. Please note that I'm inferring this from current implementation. AFAIU, mmap_changing isn't required for correctness, because all operations are happening under the right mode of mmap_lock. It's used to ensure that while a non-cooperative operations is happening, if the user has asked it to be notified, then no other userfaultfd operations should take place until the user gets the event notification. 
> > > fctx->orig = octx; > > fctx->new = ctx; > > list_add_tail(&fctx->list, fcs); > > @@ -737,7 +740,9 @@ void mremap_userfaultfd_prep(struct vm_area_struct *vma, > > if (ctx->features & UFFD_FEATURE_EVENT_REMAP) { > > vm_ctx->ctx = ctx; > > userfaultfd_ctx_get(ctx); > > + down_write(&ctx->map_changing_lock); > > atomic_inc(&ctx->mmap_changing); > > + up_write(&ctx->map_changing_lock); > > } else { > > /* Drop uffd context if remap feature not enabled */ > > vma_start_write(vma); > > @@ -783,7 +788,9 @@ bool userfaultfd_remove(struct vm_area_struct *vma, > > return true; > > > > userfaultfd_ctx_get(ctx); > > + down_write(&ctx->map_changing_lock); > > atomic_inc(&ctx->mmap_changing); > > + up_write(&ctx->map_changing_lock); > > mmap_read_unlock(mm); > > > > msg_init(&ewq.msg); > > @@ -825,7 +832,9 @@ int userfaultfd_unmap_prep(struct vm_area_struct *vma, unsigned long start, > > return -ENOMEM; > > > > userfaultfd_ctx_get(ctx); > > + down_write(&ctx->map_changing_lock); > > atomic_inc(&ctx->mmap_changing); > > + up_write(&ctx->map_changing_lock); > > unmap_ctx->ctx = ctx; > > unmap_ctx->start = start; > > unmap_ctx->end = end; > > @@ -1709,9 +1718,8 @@ static int userfaultfd_copy(struct userfaultfd_ctx *ctx, > > if (uffdio_copy.mode & UFFDIO_COPY_MODE_WP) > > flags |= MFILL_ATOMIC_WP; > > if (mmget_not_zero(ctx->mm)) { > > - ret = mfill_atomic_copy(ctx->mm, uffdio_copy.dst, uffdio_copy.src, > > - uffdio_copy.len, &ctx->mmap_changing, > > - flags); > > + ret = mfill_atomic_copy(ctx, uffdio_copy.dst, uffdio_copy.src, > > + uffdio_copy.len, flags); > > mmput(ctx->mm); > > } else { > > return -ESRCH; > > @@ -1761,9 +1769,8 @@ static int userfaultfd_zeropage(struct userfaultfd_ctx *ctx, > > goto out; > > > > if (mmget_not_zero(ctx->mm)) { > > - ret = mfill_atomic_zeropage(ctx->mm, uffdio_zeropage.range.start, > > - uffdio_zeropage.range.len, > > - &ctx->mmap_changing); > > + ret = mfill_atomic_zeropage(ctx, uffdio_zeropage.range.start, > > + uffdio_zeropage.range.len); > > mmput(ctx->mm); > > } else { > > return -ESRCH; > > @@ -1818,9 +1825,8 @@ static int userfaultfd_writeprotect(struct userfaultfd_ctx *ctx, > > return -EINVAL; > > > > if (mmget_not_zero(ctx->mm)) { > > - ret = mwriteprotect_range(ctx->mm, uffdio_wp.range.start, > > - uffdio_wp.range.len, mode_wp, > > - &ctx->mmap_changing); > > + ret = mwriteprotect_range(ctx, uffdio_wp.range.start, > > + uffdio_wp.range.len, mode_wp); > > mmput(ctx->mm); > > } else { > > return -ESRCH; > > @@ -1870,9 +1876,8 @@ static int userfaultfd_continue(struct userfaultfd_ctx *ctx, unsigned long arg) > > flags |= MFILL_ATOMIC_WP; > > > > if (mmget_not_zero(ctx->mm)) { > > - ret = mfill_atomic_continue(ctx->mm, uffdio_continue.range.start, > > - uffdio_continue.range.len, > > - &ctx->mmap_changing, flags); > > + ret = mfill_atomic_continue(ctx, uffdio_continue.range.start, > > + uffdio_continue.range.len, flags); > > mmput(ctx->mm); > > } else { > > return -ESRCH; > > @@ -1925,9 +1930,8 @@ static inline int userfaultfd_poison(struct userfaultfd_ctx *ctx, unsigned long > > goto out; > > > > if (mmget_not_zero(ctx->mm)) { > > - ret = mfill_atomic_poison(ctx->mm, uffdio_poison.range.start, > > - uffdio_poison.range.len, > > - &ctx->mmap_changing, 0); > > + ret = mfill_atomic_poison(ctx, uffdio_poison.range.start, > > + uffdio_poison.range.len, 0); > > mmput(ctx->mm); > > } else { > > return -ESRCH; > > @@ -2003,13 +2007,14 @@ static int userfaultfd_move(struct userfaultfd_ctx *ctx, > > if (mmget_not_zero(mm)) { > > mmap_read_lock(mm); > > > > - /* Re-check 
after taking mmap_lock */ > > + /* Re-check after taking map_changing_lock */ > > + down_read(&ctx->map_changing_lock); > > if (likely(!atomic_read(&ctx->mmap_changing))) > > ret = move_pages(ctx, mm, uffdio_move.dst, uffdio_move.src, > > uffdio_move.len, uffdio_move.mode); > > else > > ret = -EAGAIN; > > - > > + up_read(&ctx->map_changing_lock); > > mmap_read_unlock(mm); > > mmput(mm); > > } else { > > @@ -2216,6 +2221,7 @@ static int new_userfaultfd(int flags) > > ctx->flags = flags; > > ctx->features = 0; > > ctx->released = false; > > + init_rwsem(&ctx->map_changing_lock); > > atomic_set(&ctx->mmap_changing, 0); > > ctx->mm = current->mm; > > /* prevent the mm struct to be freed */ > > diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h > > index 691d928ee864..3210c3552976 100644 > > --- a/include/linux/userfaultfd_k.h > > +++ b/include/linux/userfaultfd_k.h > > @@ -69,6 +69,13 @@ struct userfaultfd_ctx { > > unsigned int features; > > /* released */ > > bool released; > > + /* > > + * Prevents userfaultfd operations (fill/move/wp) from happening while > > + * some non-cooperative event(s) is taking place. Increments are done > > + * in write-mode. Whereas, userfaultfd operations, which includes > > + * reading mmap_changing, is done under read-mode. > > + */ > > + struct rw_semaphore map_changing_lock; > > /* memory mappings are changing because of non-cooperative event */ > > atomic_t mmap_changing; > > /* mm with one ore more vmas attached to this userfaultfd_ctx */ > > @@ -113,22 +120,18 @@ extern int mfill_atomic_install_pte(pmd_t *dst_pmd, > > unsigned long dst_addr, struct page *page, > > bool newly_allocated, uffd_flags_t flags); > > > > -extern ssize_t mfill_atomic_copy(struct mm_struct *dst_mm, unsigned long dst_start, > > +extern ssize_t mfill_atomic_copy(struct userfaultfd_ctx *ctx, unsigned long dst_start, > > unsigned long src_start, unsigned long len, > > - atomic_t *mmap_changing, uffd_flags_t flags); > > -extern ssize_t mfill_atomic_zeropage(struct mm_struct *dst_mm, > > + uffd_flags_t flags); > > +extern ssize_t mfill_atomic_zeropage(struct userfaultfd_ctx *ctx, > > unsigned long dst_start, > > - unsigned long len, > > - atomic_t *mmap_changing); > > -extern ssize_t mfill_atomic_continue(struct mm_struct *dst_mm, unsigned long dst_start, > > - unsigned long len, atomic_t *mmap_changing, > > - uffd_flags_t flags); > > -extern ssize_t mfill_atomic_poison(struct mm_struct *dst_mm, unsigned long start, > > - unsigned long len, atomic_t *mmap_changing, > > - uffd_flags_t flags); > > -extern int mwriteprotect_range(struct mm_struct *dst_mm, > > - unsigned long start, unsigned long len, > > - bool enable_wp, atomic_t *mmap_changing); > > + unsigned long len); > > +extern ssize_t mfill_atomic_continue(struct userfaultfd_ctx *ctx, unsigned long dst_start, > > + unsigned long len, uffd_flags_t flags); > > +extern ssize_t mfill_atomic_poison(struct userfaultfd_ctx *ctx, unsigned long start, > > + unsigned long len, uffd_flags_t flags); > > +extern int mwriteprotect_range(struct userfaultfd_ctx *ctx, unsigned long start, > > + unsigned long len, bool enable_wp); > > extern long uffd_wp_range(struct vm_area_struct *vma, > > unsigned long start, unsigned long len, bool enable_wp); > > > > diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c > > index e3a91871462a..6e2ca04ab04d 100644 > > --- a/mm/userfaultfd.c > > +++ b/mm/userfaultfd.c > > @@ -353,11 +353,11 @@ static pmd_t *mm_alloc_pmd(struct mm_struct *mm, unsigned long address) > > * called with mmap_lock 
held, it will release mmap_lock before returning. > > */ > > static __always_inline ssize_t mfill_atomic_hugetlb( > > + struct userfaultfd_ctx *ctx, > > struct vm_area_struct *dst_vma, > > unsigned long dst_start, > > unsigned long src_start, > > unsigned long len, > > - atomic_t *mmap_changing, > > uffd_flags_t flags) > > { > > struct mm_struct *dst_mm = dst_vma->vm_mm; > > @@ -379,6 +379,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb( > > * feature is not supported. > > */ > > if (uffd_flags_mode_is(flags, MFILL_ATOMIC_ZEROPAGE)) { > > + up_read(&ctx->map_changing_lock); > > mmap_read_unlock(dst_mm); > > return -EINVAL; > > } > > @@ -463,6 +464,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb( > > cond_resched(); > > > > if (unlikely(err == -ENOENT)) { > > + up_read(&ctx->map_changing_lock); > > mmap_read_unlock(dst_mm); > > BUG_ON(!folio); > > > > @@ -473,12 +475,13 @@ static __always_inline ssize_t mfill_atomic_hugetlb( > > goto out; > > } > > mmap_read_lock(dst_mm); > > + down_read(&ctx->map_changing_lock); > > /* > > * If memory mappings are changing because of non-cooperative > > * operation (e.g. mremap) running in parallel, bail out and > > * request the user to retry later > > */ > > - if (mmap_changing && atomic_read(mmap_changing)) { > > + if (atomic_read(ctx->mmap_changing)) { > > err = -EAGAIN; > > break; > > } > > @@ -501,6 +504,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb( > > } > > > > out_unlock: > > + up_read(&ctx->map_changing_lock); > > mmap_read_unlock(dst_mm); > > out: > > if (folio) > > @@ -512,11 +516,11 @@ static __always_inline ssize_t mfill_atomic_hugetlb( > > } > > #else /* !CONFIG_HUGETLB_PAGE */ > > /* fail at build time if gcc attempts to use this */ > > -extern ssize_t mfill_atomic_hugetlb(struct vm_area_struct *dst_vma, > > +extern ssize_t mfill_atomic_hugetlb(struct userfaultfd_ctx *ctx, > > + struct vm_area_struct *dst_vma, > > unsigned long dst_start, > > unsigned long src_start, > > unsigned long len, > > - atomic_t *mmap_changing, > > uffd_flags_t flags); > > #endif /* CONFIG_HUGETLB_PAGE */ > > > > @@ -564,13 +568,13 @@ static __always_inline ssize_t mfill_atomic_pte(pmd_t *dst_pmd, > > return err; > > } > > > > -static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm, > > +static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx, > > unsigned long dst_start, > > unsigned long src_start, > > unsigned long len, > > - atomic_t *mmap_changing, > > uffd_flags_t flags) > > { > > + struct mm_struct *dst_mm = ctx->mm; > > struct vm_area_struct *dst_vma; > > ssize_t err; > > pmd_t *dst_pmd; > > @@ -600,8 +604,9 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm, > > * operation (e.g. 
mremap) running in parallel, bail out and > > * request the user to retry later > > */ > > + down_read(&ctx->map_changing_lock); > > err = -EAGAIN; > > - if (mmap_changing && atomic_read(mmap_changing)) > > + if (atomic_read(&ctx->mmap_changing)) > > goto out_unlock; > > > > /* > > @@ -633,8 +638,8 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm, > > * If this is a HUGETLB vma, pass off to appropriate routine > > */ > > if (is_vm_hugetlb_page(dst_vma)) > > - return mfill_atomic_hugetlb(dst_vma, dst_start, src_start, > > - len, mmap_changing, flags); > > + return mfill_atomic_hugetlb(ctx, dst_vma, dst_start, > > + src_start, len, flags); > > > > if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma)) > > goto out_unlock; > > @@ -693,6 +698,7 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm, > > if (unlikely(err == -ENOENT)) { > > void *kaddr; > > > > + up_read(&ctx->map_changing_lock); > > mmap_read_unlock(dst_mm); > > BUG_ON(!folio); > > > > @@ -723,6 +729,7 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm, > > } > > > > out_unlock: > > + up_read(&ctx->map_changing_lock); > > mmap_read_unlock(dst_mm); > > out: > > if (folio) > > @@ -733,34 +740,33 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm, > > return copied ? copied : err; > > } > > > > -ssize_t mfill_atomic_copy(struct mm_struct *dst_mm, unsigned long dst_start, > > +ssize_t mfill_atomic_copy(struct userfaultfd_ctx *ctx, unsigned long dst_start, > > unsigned long src_start, unsigned long len, > > - atomic_t *mmap_changing, uffd_flags_t flags) > > + uffd_flags_t flags) > > { > > - return mfill_atomic(dst_mm, dst_start, src_start, len, mmap_changing, > > + return mfill_atomic(ctx, dst_start, src_start, len, > > uffd_flags_set_mode(flags, MFILL_ATOMIC_COPY)); > > } > > > > -ssize_t mfill_atomic_zeropage(struct mm_struct *dst_mm, unsigned long start, > > - unsigned long len, atomic_t *mmap_changing) > > +ssize_t mfill_atomic_zeropage(struct userfaultfd_ctx *ctx, > > + unsigned long start, > > + unsigned long len) > > { > > - return mfill_atomic(dst_mm, start, 0, len, mmap_changing, > > + return mfill_atomic(ctx, start, 0, len, > > uffd_flags_set_mode(0, MFILL_ATOMIC_ZEROPAGE)); > > } > > > > -ssize_t mfill_atomic_continue(struct mm_struct *dst_mm, unsigned long start, > > - unsigned long len, atomic_t *mmap_changing, > > - uffd_flags_t flags) > > +ssize_t mfill_atomic_continue(struct userfaultfd_ctx *ctx, unsigned long start, > > + unsigned long len, uffd_flags_t flags) > > { > > - return mfill_atomic(dst_mm, start, 0, len, mmap_changing, > > + return mfill_atomic(ctx, start, 0, len, > > uffd_flags_set_mode(flags, MFILL_ATOMIC_CONTINUE)); > > } > > > > -ssize_t mfill_atomic_poison(struct mm_struct *dst_mm, unsigned long start, > > - unsigned long len, atomic_t *mmap_changing, > > - uffd_flags_t flags) > > +ssize_t mfill_atomic_poison(struct userfaultfd_ctx *ctx, unsigned long start, > > + unsigned long len, uffd_flags_t flags) > > { > > - return mfill_atomic(dst_mm, start, 0, len, mmap_changing, > > + return mfill_atomic(ctx, start, 0, len, > > uffd_flags_set_mode(flags, MFILL_ATOMIC_POISON)); > > } > > > > @@ -793,10 +799,10 @@ long uffd_wp_range(struct vm_area_struct *dst_vma, > > return ret; > > } > > > > -int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start, > > - unsigned long len, bool enable_wp, > > - atomic_t *mmap_changing) > > +int mwriteprotect_range(struct userfaultfd_ctx *ctx, unsigned long start, > > + unsigned 
long len, bool enable_wp) > > { > > + struct mm_struct *dst_mm = ctx->mm; > > unsigned long end = start + len; > > unsigned long _start, _end; > > struct vm_area_struct *dst_vma; > > @@ -820,8 +826,9 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start, > > * operation (e.g. mremap) running in parallel, bail out and > > * request the user to retry later > > */ > > + down_read(&ctx->map_changing_lock); > > err = -EAGAIN; > > - if (mmap_changing && atomic_read(mmap_changing)) > > + if (atomic_read(&ctx->mmap_changing)) > > goto out_unlock; > > > > err = -ENOENT; > > @@ -850,6 +857,7 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start, > > err = 0; > > } > > out_unlock: > > + up_read(&ctx->map_changing_lock); > > mmap_read_unlock(dst_mm); > > return err; > > } > > -- > > 2.43.0.429.g432eaa2c6b-goog > > > >
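
The behaviour described here is the user-visible half of the contract: while a non-cooperative event is pending, userfaultfd operations fail with EAGAIN and userspace is expected to retry once it has read the event. A sketch of that retry loop around the real UFFDIO_COPY ioctl (a fragment rather than a complete program; the event-draining step is elided, and the partial-progress handling via uffdio_copy.copy is an assumption for illustration):

```c
#include <errno.h>
#include <linux/userfaultfd.h>
#include <string.h>
#include <sys/ioctl.h>

/* Resolve a fault range with UFFDIO_COPY, retrying while the kernel
 * reports EAGAIN because a non-cooperative event is in flight.  A real
 * program must read the pending event(s) from the uffd file descriptor
 * before retrying (elided here), otherwise this loop may spin. */
int uffd_copy_retry(int uffd, unsigned long dst, unsigned long src,
		    unsigned long len)
{
	struct uffdio_copy copy;

	for (;;) {
		memset(&copy, 0, sizeof(copy));
		copy.dst = dst;
		copy.src = src;
		copy.len = len;
		copy.mode = 0;

		if (ioctl(uffd, UFFDIO_COPY, &copy) == 0)
			return 0;		/* whole range copied */
		if (errno != EAGAIN)
			return -1;		/* genuine error */

		/* EAGAIN: mmap_changing was set.  Drain the event queue
		 * here, then retry whatever is left of the range. */
		if (copy.copy > 0 && (unsigned long)copy.copy < len) {
			dst += copy.copy;
			src += copy.copy;
			len -= copy.copy;
		}
	}
}
```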
* Lokesh Gidra <lokeshgidra@google.com> [240129 17:35]: > On Mon, Jan 29, 2024 at 1:00 PM Liam R. Howlett <Liam.Howlett@oracle.com> wrote: > > > > * Lokesh Gidra <lokeshgidra@google.com> [240129 14:35]: > > > Increments and loads to mmap_changing are always in mmap_lock > > > critical section. > > > > Read or write? > > > It's write-mode when incrementing (except in case of > userfaultfd_remove() where it's done in read-mode) and loads are in > mmap_lock (read-mode). I'll clarify this in the next version. > > > > > This ensures that if userspace requests event > > > notification for non-cooperative operations (e.g. mremap), userfaultfd > > > operations don't occur concurrently. > > > > > > This can be achieved by using a separate read-write semaphore in > > > userfaultfd_ctx such that increments are done in write-mode and loads > > > in read-mode, thereby eliminating the dependency on mmap_lock for this > > > purpose. > > > > > > This is a preparatory step before we replace mmap_lock usage with > > > per-vma locks in fill/move ioctls. > > > > > > Signed-off-by: Lokesh Gidra <lokeshgidra@google.com> > > > --- > > > fs/userfaultfd.c | 40 ++++++++++++---------- > > > include/linux/userfaultfd_k.h | 31 ++++++++++-------- > > > mm/userfaultfd.c | 62 ++++++++++++++++++++--------------- > > > 3 files changed, 75 insertions(+), 58 deletions(-) > > > > > > diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c > > > index 58331b83d648..c00a021bcce4 100644 > > > --- a/fs/userfaultfd.c > > > +++ b/fs/userfaultfd.c > > > @@ -685,12 +685,15 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs) > > > ctx->flags = octx->flags; > > > ctx->features = octx->features; > > > ctx->released = false; > > > + init_rwsem(&ctx->map_changing_lock); > > > atomic_set(&ctx->mmap_changing, 0); > > > ctx->mm = vma->vm_mm; > > > mmgrab(ctx->mm); > > > > > > userfaultfd_ctx_get(octx); > > > + down_write(&octx->map_changing_lock); > > > atomic_inc(&octx->mmap_changing); > > > + up_write(&octx->map_changing_lock); On init, I don't think taking the lock is strictly necessary - unless there is a way to access it before this increment? Not that it would cost much. > > > > This can potentially hold up your writer as the readers execute. I > > think this will change your priority (ie: priority inversion)? > > Priority inversion, if any, is already happening due to mmap_lock, no? > Also, I thought rw_semaphore implementation is fair, so the writer > will eventually get the lock right? Please correct me if I'm wrong. You are correct. Any writer will stop any new readers, but readers currently in the section must finish before the writer. > > At this patch: there can't be any readers as they need to acquire > mmap_lock in read-mode first. While writers, at the point of > incrementing mmap_changing, already hold mmap_lock in write-mode. > > With per-vma locks, the same synchronization that mmap_lock achieved > around mmap_changing, will be achieved by ctx->map_changing_lock. The inversion I was thinking was that the writer cannot complete the write until the reader is done failing because the atomic_inc has happened..? I see the writer as a priority since readers cannot complete within the write, but I read it wrong. I think the readers are fine if the happen before, during, or after a write. The work is thrown out if the reader happens during the transition between those states, which is detected through the atomic. This makes sense now. > > > > You could use the first bit of the atomic_inc as indication of a write. 
> > So if the mmap_changing is even, then there are no writers. If it > > didn't change and it's even then you know no modification has happened > > (or it overflowed and hit the same number which would be rare, but > > maybe okay?). > > This is already achievable, right? If mmap_changing is >0 then we know > there are writers. The problem is that we want writers (like mremap > operations) to block as long as there is a userfaultfd operation (also > reader of mmap_changing) going on. Please note that I'm inferring this > from current implementation. > > AFAIU, mmap_changing isn't required for correctness, because all > operations are happening under the right mode of mmap_lock. It's used > to ensure that while a non-cooperative operations is happening, if the > user has asked it to be notified, then no other userfaultfd operations > should take place until the user gets the event notification. I think it is needed, mmap_changing is read before the mmap_lock is taken, then compared after the mmap_lock is taken (both read mode) to ensure nothing has changed. .. > > > @@ -783,7 +788,9 @@ bool userfaultfd_remove(struct vm_area_struct *vma, > > > return true; > > > > > > userfaultfd_ctx_get(ctx); > > > + down_write(&ctx->map_changing_lock); > > > atomic_inc(&ctx->mmap_changing); > > > + up_write(&ctx->map_changing_lock); > > > mmap_read_unlock(mm); > > > > > > msg_init(&ewq.msg); If this happens in read mode, then why are you waiting for the readers to leave? Can't you just increment the atomic? It's fine happening in read mode today, so it should be fine with this new rwsem. Thanks, Liam ..
On Mon, Jan 29, 2024 at 11:35:11AM -0800, Lokesh Gidra wrote: > Increments and loads to mmap_changing are always in mmap_lock > critical section. This ensures that if userspace requests event > notification for non-cooperative operations (e.g. mremap), userfaultfd > operations don't occur concurrently. > > This can be achieved by using a separate read-write semaphore in > userfaultfd_ctx such that increments are done in write-mode and loads > in read-mode, thereby eliminating the dependency on mmap_lock for this > purpose. > > This is a preparatory step before we replace mmap_lock usage with > per-vma locks in fill/move ioctls. > > Signed-off-by: Lokesh Gidra <lokeshgidra@google.com> Reviewed-by: Mike Rapoport (IBM) <rppt@kernel.org> > --- > fs/userfaultfd.c | 40 ++++++++++++---------- > include/linux/userfaultfd_k.h | 31 ++++++++++-------- > mm/userfaultfd.c | 62 ++++++++++++++++++++--------------- > 3 files changed, 75 insertions(+), 58 deletions(-) > > diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c > index 58331b83d648..c00a021bcce4 100644 > --- a/fs/userfaultfd.c > +++ b/fs/userfaultfd.c > @@ -685,12 +685,15 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs) > ctx->flags = octx->flags; > ctx->features = octx->features; > ctx->released = false; > + init_rwsem(&ctx->map_changing_lock); > atomic_set(&ctx->mmap_changing, 0); > ctx->mm = vma->vm_mm; > mmgrab(ctx->mm); > > userfaultfd_ctx_get(octx); > + down_write(&octx->map_changing_lock); > atomic_inc(&octx->mmap_changing); > + up_write(&octx->map_changing_lock); > fctx->orig = octx; > fctx->new = ctx; > list_add_tail(&fctx->list, fcs); > @@ -737,7 +740,9 @@ void mremap_userfaultfd_prep(struct vm_area_struct *vma, > if (ctx->features & UFFD_FEATURE_EVENT_REMAP) { > vm_ctx->ctx = ctx; > userfaultfd_ctx_get(ctx); > + down_write(&ctx->map_changing_lock); > atomic_inc(&ctx->mmap_changing); > + up_write(&ctx->map_changing_lock); > } else { > /* Drop uffd context if remap feature not enabled */ > vma_start_write(vma); > @@ -783,7 +788,9 @@ bool userfaultfd_remove(struct vm_area_struct *vma, > return true; > > userfaultfd_ctx_get(ctx); > + down_write(&ctx->map_changing_lock); > atomic_inc(&ctx->mmap_changing); > + up_write(&ctx->map_changing_lock); > mmap_read_unlock(mm); > > msg_init(&ewq.msg); > @@ -825,7 +832,9 @@ int userfaultfd_unmap_prep(struct vm_area_struct *vma, unsigned long start, > return -ENOMEM; > > userfaultfd_ctx_get(ctx); > + down_write(&ctx->map_changing_lock); > atomic_inc(&ctx->mmap_changing); > + up_write(&ctx->map_changing_lock); > unmap_ctx->ctx = ctx; > unmap_ctx->start = start; > unmap_ctx->end = end; > @@ -1709,9 +1718,8 @@ static int userfaultfd_copy(struct userfaultfd_ctx *ctx, > if (uffdio_copy.mode & UFFDIO_COPY_MODE_WP) > flags |= MFILL_ATOMIC_WP; > if (mmget_not_zero(ctx->mm)) { > - ret = mfill_atomic_copy(ctx->mm, uffdio_copy.dst, uffdio_copy.src, > - uffdio_copy.len, &ctx->mmap_changing, > - flags); > + ret = mfill_atomic_copy(ctx, uffdio_copy.dst, uffdio_copy.src, > + uffdio_copy.len, flags); > mmput(ctx->mm); > } else { > return -ESRCH; > @@ -1761,9 +1769,8 @@ static int userfaultfd_zeropage(struct userfaultfd_ctx *ctx, > goto out; > > if (mmget_not_zero(ctx->mm)) { > - ret = mfill_atomic_zeropage(ctx->mm, uffdio_zeropage.range.start, > - uffdio_zeropage.range.len, > - &ctx->mmap_changing); > + ret = mfill_atomic_zeropage(ctx, uffdio_zeropage.range.start, > + uffdio_zeropage.range.len); > mmput(ctx->mm); > } else { > return -ESRCH; > @@ -1818,9 +1825,8 @@ static int 
userfaultfd_writeprotect(struct userfaultfd_ctx *ctx, > return -EINVAL; > > if (mmget_not_zero(ctx->mm)) { > - ret = mwriteprotect_range(ctx->mm, uffdio_wp.range.start, > - uffdio_wp.range.len, mode_wp, > - &ctx->mmap_changing); > + ret = mwriteprotect_range(ctx, uffdio_wp.range.start, > + uffdio_wp.range.len, mode_wp); > mmput(ctx->mm); > } else { > return -ESRCH; > @@ -1870,9 +1876,8 @@ static int userfaultfd_continue(struct userfaultfd_ctx *ctx, unsigned long arg) > flags |= MFILL_ATOMIC_WP; > > if (mmget_not_zero(ctx->mm)) { > - ret = mfill_atomic_continue(ctx->mm, uffdio_continue.range.start, > - uffdio_continue.range.len, > - &ctx->mmap_changing, flags); > + ret = mfill_atomic_continue(ctx, uffdio_continue.range.start, > + uffdio_continue.range.len, flags); > mmput(ctx->mm); > } else { > return -ESRCH; > @@ -1925,9 +1930,8 @@ static inline int userfaultfd_poison(struct userfaultfd_ctx *ctx, unsigned long > goto out; > > if (mmget_not_zero(ctx->mm)) { > - ret = mfill_atomic_poison(ctx->mm, uffdio_poison.range.start, > - uffdio_poison.range.len, > - &ctx->mmap_changing, 0); > + ret = mfill_atomic_poison(ctx, uffdio_poison.range.start, > + uffdio_poison.range.len, 0); > mmput(ctx->mm); > } else { > return -ESRCH; > @@ -2003,13 +2007,14 @@ static int userfaultfd_move(struct userfaultfd_ctx *ctx, > if (mmget_not_zero(mm)) { > mmap_read_lock(mm); > > - /* Re-check after taking mmap_lock */ > + /* Re-check after taking map_changing_lock */ > + down_read(&ctx->map_changing_lock); > if (likely(!atomic_read(&ctx->mmap_changing))) > ret = move_pages(ctx, mm, uffdio_move.dst, uffdio_move.src, > uffdio_move.len, uffdio_move.mode); > else > ret = -EAGAIN; > - > + up_read(&ctx->map_changing_lock); > mmap_read_unlock(mm); > mmput(mm); > } else { > @@ -2216,6 +2221,7 @@ static int new_userfaultfd(int flags) > ctx->flags = flags; > ctx->features = 0; > ctx->released = false; > + init_rwsem(&ctx->map_changing_lock); > atomic_set(&ctx->mmap_changing, 0); > ctx->mm = current->mm; > /* prevent the mm struct to be freed */ > diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h > index 691d928ee864..3210c3552976 100644 > --- a/include/linux/userfaultfd_k.h > +++ b/include/linux/userfaultfd_k.h > @@ -69,6 +69,13 @@ struct userfaultfd_ctx { > unsigned int features; > /* released */ > bool released; > + /* > + * Prevents userfaultfd operations (fill/move/wp) from happening while > + * some non-cooperative event(s) is taking place. Increments are done > + * in write-mode. Whereas, userfaultfd operations, which includes > + * reading mmap_changing, is done under read-mode. 
> + */ > + struct rw_semaphore map_changing_lock; > /* memory mappings are changing because of non-cooperative event */ > atomic_t mmap_changing; > /* mm with one ore more vmas attached to this userfaultfd_ctx */ > @@ -113,22 +120,18 @@ extern int mfill_atomic_install_pte(pmd_t *dst_pmd, > unsigned long dst_addr, struct page *page, > bool newly_allocated, uffd_flags_t flags); > > -extern ssize_t mfill_atomic_copy(struct mm_struct *dst_mm, unsigned long dst_start, > +extern ssize_t mfill_atomic_copy(struct userfaultfd_ctx *ctx, unsigned long dst_start, > unsigned long src_start, unsigned long len, > - atomic_t *mmap_changing, uffd_flags_t flags); > -extern ssize_t mfill_atomic_zeropage(struct mm_struct *dst_mm, > + uffd_flags_t flags); > +extern ssize_t mfill_atomic_zeropage(struct userfaultfd_ctx *ctx, > unsigned long dst_start, > - unsigned long len, > - atomic_t *mmap_changing); > -extern ssize_t mfill_atomic_continue(struct mm_struct *dst_mm, unsigned long dst_start, > - unsigned long len, atomic_t *mmap_changing, > - uffd_flags_t flags); > -extern ssize_t mfill_atomic_poison(struct mm_struct *dst_mm, unsigned long start, > - unsigned long len, atomic_t *mmap_changing, > - uffd_flags_t flags); > -extern int mwriteprotect_range(struct mm_struct *dst_mm, > - unsigned long start, unsigned long len, > - bool enable_wp, atomic_t *mmap_changing); > + unsigned long len); > +extern ssize_t mfill_atomic_continue(struct userfaultfd_ctx *ctx, unsigned long dst_start, > + unsigned long len, uffd_flags_t flags); > +extern ssize_t mfill_atomic_poison(struct userfaultfd_ctx *ctx, unsigned long start, > + unsigned long len, uffd_flags_t flags); > +extern int mwriteprotect_range(struct userfaultfd_ctx *ctx, unsigned long start, > + unsigned long len, bool enable_wp); > extern long uffd_wp_range(struct vm_area_struct *vma, > unsigned long start, unsigned long len, bool enable_wp); > > diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c > index e3a91871462a..6e2ca04ab04d 100644 > --- a/mm/userfaultfd.c > +++ b/mm/userfaultfd.c > @@ -353,11 +353,11 @@ static pmd_t *mm_alloc_pmd(struct mm_struct *mm, unsigned long address) > * called with mmap_lock held, it will release mmap_lock before returning. > */ > static __always_inline ssize_t mfill_atomic_hugetlb( > + struct userfaultfd_ctx *ctx, > struct vm_area_struct *dst_vma, > unsigned long dst_start, > unsigned long src_start, > unsigned long len, > - atomic_t *mmap_changing, > uffd_flags_t flags) > { > struct mm_struct *dst_mm = dst_vma->vm_mm; > @@ -379,6 +379,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb( > * feature is not supported. > */ > if (uffd_flags_mode_is(flags, MFILL_ATOMIC_ZEROPAGE)) { > + up_read(&ctx->map_changing_lock); > mmap_read_unlock(dst_mm); > return -EINVAL; > } > @@ -463,6 +464,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb( > cond_resched(); > > if (unlikely(err == -ENOENT)) { > + up_read(&ctx->map_changing_lock); > mmap_read_unlock(dst_mm); > BUG_ON(!folio); > > @@ -473,12 +475,13 @@ static __always_inline ssize_t mfill_atomic_hugetlb( > goto out; > } > mmap_read_lock(dst_mm); > + down_read(&ctx->map_changing_lock); > /* > * If memory mappings are changing because of non-cooperative > * operation (e.g. 
mremap) running in parallel, bail out and > * request the user to retry later > */ > - if (mmap_changing && atomic_read(mmap_changing)) { > + if (atomic_read(&ctx->mmap_changing)) { > err = -EAGAIN; > break; > } > @@ -501,6 +504,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb( > } > > out_unlock: > + up_read(&ctx->map_changing_lock); > mmap_read_unlock(dst_mm); > out: > if (folio) > @@ -512,11 +516,11 @@ static __always_inline ssize_t mfill_atomic_hugetlb( > } > #else /* !CONFIG_HUGETLB_PAGE */ > /* fail at build time if gcc attempts to use this */ > -extern ssize_t mfill_atomic_hugetlb(struct vm_area_struct *dst_vma, > +extern ssize_t mfill_atomic_hugetlb(struct userfaultfd_ctx *ctx, > + struct vm_area_struct *dst_vma, > unsigned long dst_start, > unsigned long src_start, > unsigned long len, > - atomic_t *mmap_changing, > uffd_flags_t flags); > #endif /* CONFIG_HUGETLB_PAGE */ > > @@ -564,13 +568,13 @@ static __always_inline ssize_t mfill_atomic_pte(pmd_t *dst_pmd, > return err; > } > > -static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm, > +static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx, > unsigned long dst_start, > unsigned long src_start, > unsigned long len, > - atomic_t *mmap_changing, > uffd_flags_t flags) > { > + struct mm_struct *dst_mm = ctx->mm; > struct vm_area_struct *dst_vma; > ssize_t err; > pmd_t *dst_pmd; > @@ -600,8 +604,9 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm, > * operation (e.g. mremap) running in parallel, bail out and > * request the user to retry later > */ > + down_read(&ctx->map_changing_lock); > err = -EAGAIN; > - if (mmap_changing && atomic_read(mmap_changing)) > + if (atomic_read(&ctx->mmap_changing)) > goto out_unlock; > > /* > @@ -633,8 +638,8 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm, > * If this is a HUGETLB vma, pass off to appropriate routine > */ > if (is_vm_hugetlb_page(dst_vma)) > - return mfill_atomic_hugetlb(dst_vma, dst_start, src_start, > - len, mmap_changing, flags); > + return mfill_atomic_hugetlb(ctx, dst_vma, dst_start, > + src_start, len, flags); > > if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma)) > goto out_unlock; > @@ -693,6 +698,7 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm, > if (unlikely(err == -ENOENT)) { > void *kaddr; > > + up_read(&ctx->map_changing_lock); > mmap_read_unlock(dst_mm); > BUG_ON(!folio); > > @@ -723,6 +729,7 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm, > } > > out_unlock: > + up_read(&ctx->map_changing_lock); > mmap_read_unlock(dst_mm); > out: > if (folio) > @@ -733,34 +740,33 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm, > return copied ?
copied : err; > } > > -ssize_t mfill_atomic_copy(struct mm_struct *dst_mm, unsigned long dst_start, > +ssize_t mfill_atomic_copy(struct userfaultfd_ctx *ctx, unsigned long dst_start, > unsigned long src_start, unsigned long len, > - atomic_t *mmap_changing, uffd_flags_t flags) > + uffd_flags_t flags) > { > - return mfill_atomic(dst_mm, dst_start, src_start, len, mmap_changing, > + return mfill_atomic(ctx, dst_start, src_start, len, > uffd_flags_set_mode(flags, MFILL_ATOMIC_COPY)); > } > > -ssize_t mfill_atomic_zeropage(struct mm_struct *dst_mm, unsigned long start, > - unsigned long len, atomic_t *mmap_changing) > +ssize_t mfill_atomic_zeropage(struct userfaultfd_ctx *ctx, > + unsigned long start, > + unsigned long len) > { > - return mfill_atomic(dst_mm, start, 0, len, mmap_changing, > + return mfill_atomic(ctx, start, 0, len, > uffd_flags_set_mode(0, MFILL_ATOMIC_ZEROPAGE)); > } > > -ssize_t mfill_atomic_continue(struct mm_struct *dst_mm, unsigned long start, > - unsigned long len, atomic_t *mmap_changing, > - uffd_flags_t flags) > +ssize_t mfill_atomic_continue(struct userfaultfd_ctx *ctx, unsigned long start, > + unsigned long len, uffd_flags_t flags) > { > - return mfill_atomic(dst_mm, start, 0, len, mmap_changing, > + return mfill_atomic(ctx, start, 0, len, > uffd_flags_set_mode(flags, MFILL_ATOMIC_CONTINUE)); > } > > -ssize_t mfill_atomic_poison(struct mm_struct *dst_mm, unsigned long start, > - unsigned long len, atomic_t *mmap_changing, > - uffd_flags_t flags) > +ssize_t mfill_atomic_poison(struct userfaultfd_ctx *ctx, unsigned long start, > + unsigned long len, uffd_flags_t flags) > { > - return mfill_atomic(dst_mm, start, 0, len, mmap_changing, > + return mfill_atomic(ctx, start, 0, len, > uffd_flags_set_mode(flags, MFILL_ATOMIC_POISON)); > } > > @@ -793,10 +799,10 @@ long uffd_wp_range(struct vm_area_struct *dst_vma, > return ret; > } > > -int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start, > - unsigned long len, bool enable_wp, > - atomic_t *mmap_changing) > +int mwriteprotect_range(struct userfaultfd_ctx *ctx, unsigned long start, > + unsigned long len, bool enable_wp) > { > + struct mm_struct *dst_mm = ctx->mm; > unsigned long end = start + len; > unsigned long _start, _end; > struct vm_area_struct *dst_vma; > @@ -820,8 +826,9 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start, > * operation (e.g. mremap) running in parallel, bail out and > * request the user to retry later > */ > + down_read(&ctx->map_changing_lock); > err = -EAGAIN; > - if (mmap_changing && atomic_read(mmap_changing)) > + if (atomic_read(&ctx->mmap_changing)) > goto out_unlock; > > err = -ENOENT; > @@ -850,6 +857,7 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start, > err = 0; > } > out_unlock: > + up_read(&ctx->map_changing_lock); > mmap_read_unlock(dst_mm); > return err; > } > -- > 2.43.0.429.g432eaa2c6b-goog >
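Condensing the diff above: every uffdio path now takes the new rwsem in read mode around its work and re-checks mmap_changing under it, while the non-cooperative event sites (fork, mremap, remove, unmap) bump the counter under the write lock. Below is a minimal userspace model of that scheme, for illustration only: a pthread rwlock and a C11 atomic stand in for the kernel's rw_semaphore and atomic_t, and the function names are made up, not kernel symbols.

#include <errno.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static pthread_rwlock_t map_changing_lock = PTHREAD_RWLOCK_INITIALIZER;
static atomic_int mmap_changing;

/* A non-cooperative event raises the counter in write mode, so it waits
 * for any in-flight uffdio operation (read holder) to drain first. */
static void event_start(void)
{
	pthread_rwlock_wrlock(&map_changing_lock);
	atomic_fetch_add(&mmap_changing, 1);
	pthread_rwlock_unlock(&map_changing_lock);
}

/* The counter drops once the monitor has consumed the event notification. */
static void event_done(void)
{
	atomic_fetch_sub(&mmap_changing, 1);
}

/* A uffdio operation (copy/zeropage/continue/poison/wp) runs in read mode
 * and bails out with -EAGAIN while an event is pending. */
static int uffdio_op(void)
{
	int ret = -EAGAIN;

	pthread_rwlock_rdlock(&map_changing_lock);
	if (!atomic_load(&mmap_changing))
		ret = 0;	/* ...the actual fill/wp work would go here... */
	pthread_rwlock_unlock(&map_changing_lock);
	return ret;
}

int main(void)
{
	printf("before event: %d\n", uffdio_op());	/* 0 */
	event_start();
	printf("during event: %d\n", uffdio_op());	/* -EAGAIN */
	event_done();
	printf("after event:  %d\n", uffdio_op());	/* 0 */
	return 0;
}

The write lock is what makes an event increment wait for operations that are already past their mmap_changing check, which is exactly the behaviour debated for userfaultfd_remove() in the replies below.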
On Mon, Jan 29, 2024 at 10:46:27PM -0500, Liam R. Howlett wrote: > * Lokesh Gidra <lokeshgidra@google.com> [240129 17:35]: > > On Mon, Jan 29, 2024 at 1:00 PM Liam R. Howlett <Liam.Howlett@oracle.com> wrote: > > > > > > * Lokesh Gidra <lokeshgidra@google.com> [240129 14:35]: > > > > Increments and loads to mmap_changing are always in mmap_lock > > > > critical section. > > > > > > Read or write? > > > > > It's write-mode when incrementing (except in case of > > userfaultfd_remove() where it's done in read-mode) and loads are in > > mmap_lock (read-mode). I'll clarify this in the next version. > > > > > > > This ensures that if userspace requests event > > > > notification for non-cooperative operations (e.g. mremap), userfaultfd > > > > operations don't occur concurrently. > > > > > > > > This can be achieved by using a separate read-write semaphore in > > > > userfaultfd_ctx such that increments are done in write-mode and loads > > > > in read-mode, thereby eliminating the dependency on mmap_lock for this > > > > purpose. > > > > > > > > This is a preparatory step before we replace mmap_lock usage with > > > > per-vma locks in fill/move ioctls. > > > > > > > > Signed-off-by: Lokesh Gidra <lokeshgidra@google.com> > > > > --- > > > > fs/userfaultfd.c | 40 ++++++++++++---------- > > > > include/linux/userfaultfd_k.h | 31 ++++++++++-------- > > > > mm/userfaultfd.c | 62 ++++++++++++++++++++--------------- > > > > 3 files changed, 75 insertions(+), 58 deletions(-) > > > > > > > > diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c > > > > index 58331b83d648..c00a021bcce4 100644 > > > > --- a/fs/userfaultfd.c > > > > +++ b/fs/userfaultfd.c > > > > @@ -685,12 +685,15 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs) > > > > ctx->flags = octx->flags; > > > > ctx->features = octx->features; > > > > ctx->released = false; > > > > + init_rwsem(&ctx->map_changing_lock); > > > > atomic_set(&ctx->mmap_changing, 0); > > > > ctx->mm = vma->vm_mm; > > > > mmgrab(ctx->mm); > > > > > > > > userfaultfd_ctx_get(octx); > > > > + down_write(&octx->map_changing_lock); > > > > atomic_inc(&octx->mmap_changing); > > > > + up_write(&octx->map_changing_lock); > > On init, I don't think taking the lock is strictly necessary - unless > there is a way to access it before this increment? Not that it would > cost much. It's fork, the lock is for the context of the parent process and there could be uffdio ops running in parallel on its VM. > > > You could use the first bit of the atomic_inc as indication of a write. > > > So if the mmap_changing is even, then there are no writers. If it > > > didn't change and it's even then you know no modification has happened > > > (or it overflowed and hit the same number which would be rare, but > > > maybe okay?). > > > > This is already achievable, right? If mmap_changing is >0 then we know > > there are writers. The problem is that we want writers (like mremap > > operations) to block as long as there is a userfaultfd operation (also > > reader of mmap_changing) going on. Please note that I'm inferring this > > from current implementation. > > > > AFAIU, mmap_changing isn't required for correctness, because all > > operations are happening under the right mode of mmap_lock. It's used > > to ensure that while a non-cooperative operations is happening, if the > > user has asked it to be notified, then no other userfaultfd operations > > should take place until the user gets the event notification. 
> > I think it is needed, mmap_changing is read before the mmap_lock is > taken, then compared after the mmap_lock is taken (both read mode) to > ensure nothing has changed. mmap_changing is required to ensure that no uffdio operation runs in parallel with operations that modify the memory map, like fork, mremap, munmap and some of the madvise calls. And we do need the writers to block if there is an uffdio operation going on, so I think an rwsem is the right way to protect mmap_changing. > > > > @@ -783,7 +788,9 @@ bool userfaultfd_remove(struct vm_area_struct *vma, > > > > return true; > > > > > > > > userfaultfd_ctx_get(ctx); > > > > + down_write(&ctx->map_changing_lock); > > > > atomic_inc(&ctx->mmap_changing); > > > > + up_write(&ctx->map_changing_lock); > > > > mmap_read_unlock(mm); > > > > > > > > msg_init(&ewq.msg); > If this happens in read mode, then why are you waiting for the readers > to leave? Can't you just increment the atomic? It's fine happening in > read mode today, so it should be fine with this new rwsem. It's been a while and the details are blurred now, but if I remember correctly, having this in read mode forced the non-cooperative uffd monitor to be single-threaded. If a monitor runs, say, uffdio_copy, and in parallel a thread in the monitored process does MADV_DONTNEED, the latter will wait for the userfaultfd_remove notification to be processed in the monitor and drop the VMA contents only afterwards. If a non-cooperative monitor were to process notifications in parallel with uffdio ops, MADV_DONTNEED could continue and race with uffdio_copy, so read mode wouldn't be enough. There was not much sense in making MADV_DONTNEED take mmap_lock in write mode just for this, but now taking the rwsem in write mode here sounds reasonable. > Thanks, > Liam > > ...
* Mike Rapoport <rppt@kernel.org> [240130 03:55]: > On Mon, Jan 29, 2024 at 10:46:27PM -0500, Liam R. Howlett wrote: > > * Lokesh Gidra <lokeshgidra@google.com> [240129 17:35]: > > > On Mon, Jan 29, 2024 at 1:00 PM Liam R. Howlett <Liam.Howlett@oracle.com> wrote: > > > > > > > > * Lokesh Gidra <lokeshgidra@google.com> [240129 14:35]: > > > > > Increments and loads to mmap_changing are always in mmap_lock > > > > > critical section. > > > > > > > > Read or write? > > > > > > > It's write-mode when incrementing (except in case of > > > userfaultfd_remove() where it's done in read-mode) and loads are in > > > mmap_lock (read-mode). I'll clarify this in the next version. > > > > > > > > > This ensures that if userspace requests event > > > > > notification for non-cooperative operations (e.g. mremap), userfaultfd > > > > > operations don't occur concurrently. > > > > > > > > > > This can be achieved by using a separate read-write semaphore in > > > > > userfaultfd_ctx such that increments are done in write-mode and loads > > > > > in read-mode, thereby eliminating the dependency on mmap_lock for this > > > > > purpose. > > > > > > > > > > This is a preparatory step before we replace mmap_lock usage with > > > > > per-vma locks in fill/move ioctls. > > > > > > > > > > Signed-off-by: Lokesh Gidra <lokeshgidra@google.com> > > > > > --- > > > > > fs/userfaultfd.c | 40 ++++++++++++---------- > > > > > include/linux/userfaultfd_k.h | 31 ++++++++++-------- > > > > > mm/userfaultfd.c | 62 ++++++++++++++++++++--------------- > > > > > 3 files changed, 75 insertions(+), 58 deletions(-) > > > > > > > > > > diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c > > > > > index 58331b83d648..c00a021bcce4 100644 > > > > > --- a/fs/userfaultfd.c > > > > > +++ b/fs/userfaultfd.c > > > > > @@ -685,12 +685,15 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs) > > > > > ctx->flags = octx->flags; > > > > > ctx->features = octx->features; > > > > > ctx->released = false; > > > > > + init_rwsem(&ctx->map_changing_lock); > > > > > atomic_set(&ctx->mmap_changing, 0); > > > > > ctx->mm = vma->vm_mm; > > > > > mmgrab(ctx->mm); > > > > > > > > > > userfaultfd_ctx_get(octx); > > > > > + down_write(&octx->map_changing_lock); > > > > > atomic_inc(&octx->mmap_changing); > > > > > + up_write(&octx->map_changing_lock); > > > > On init, I don't think taking the lock is strictly necessary - unless > > there is a way to access it before this increment? Not that it would > > cost much. > > It's fork, the lock is for the context of the parent process and there > could be uffdio ops running in parallel on its VM. Is this necessary then? We are getting the octx from another mm but the mm is locked for forking. Why does it matter if there are readers of the octx? I assume, currently, there is no way the userfaultfd ctx can be altered under mmap_lock held for writing. I would think it matters if there are writers (which, I presume are blocked by the mmap_lock for now?) Shouldn't we hold the write lock for the entire dup process, I mean, if we remove the userfaultfd from the mmap_lock, we cannot let the structure being duplicated change half way through the dup process? I must be missing something with where this is headed? > > > > > You could use the first bit of the atomic_inc as indication of a write. > > > > So if the mmap_changing is even, then there are no writers. 
If it > > > > didn't change and it's even then you know no modification has happened > > > > (or it overflowed and hit the same number which would be rare, but > > > > maybe okay?). > > > > > > This is already achievable, right? If mmap_changing is >0 then we know > > > there are writers. The problem is that we want writers (like mremap > > > operations) to block as long as there is a userfaultfd operation (also > > > reader of mmap_changing) going on. Please note that I'm inferring this > > > from current implementation. > > > > > > AFAIU, mmap_changing isn't required for correctness, because all > > > operations are happening under the right mode of mmap_lock. It's used > > > to ensure that while a non-cooperative operations is happening, if the > > > user has asked it to be notified, then no other userfaultfd operations > > > should take place until the user gets the event notification. > > > > I think it is needed, mmap_changing is read before the mmap_lock is > > taken, then compared after the mmap_lock is taken (both read mode) to > > ensure nothing has changed. > > mmap_changing is required to ensure that no uffdio operation runs in > parallel with operations that modify the memory map, like fork, mremap, > munmap and some of madvise calls. > And we do need the writers to block if there is an uffdio operation going > on, so I think an rwsem is the right way to protect mmap_chaniging. > > > > > > @@ -783,7 +788,9 @@ bool userfaultfd_remove(struct vm_area_struct *vma, > > > > > return true; > > > > > > > > > > userfaultfd_ctx_get(ctx); > > > > > + down_write(&ctx->map_changing_lock); > > > > > atomic_inc(&ctx->mmap_changing); > > > > > + up_write(&ctx->map_changing_lock); > > > > > mmap_read_unlock(mm); > > > > > > > > > > msg_init(&ewq.msg); > > > > If this happens in read mode, then why are you waiting for the readers > > to leave? Can't you just increment the atomic? It's fine happening in > > read mode today, so it should be fine with this new rwsem. > > It's been a while and the details are blurred now, but if I remember > correctly, having this in read mode forced non-cooperative uffd monitor to > be single threaded. If a monitor runs, say uffdio_copy, and in parallel a > thread in the monitored process does MADV_DONTNEED, the latter will wait > for userfaultfd_remove notification to be processed in the monitor and drop > the VMA contents only afterwards. If a non-cooperative monitor would > process notification in parallel with uffdio ops, MADV_DONTNEED could > continue and race with uffdio_copy, so read mode wouldn't be enough. > Right now this function won't stop to wait for readers to exit the critical section, but with this change there will be a pause (since the down_write() will need to wait for the readers with the read lock). So this is adding a delay in this call path that isn't necessary (?) nor existed before. If you have non-cooperative uffd monitors, then you will have to wait for them to finish to mark the uffd as being removed, where as before it was a fire & forget, this is now a wait to tell. > There was no much sense to make MADV_DONTNEED take mmap_lock in write mode > just for this, but now taking the rwsem in write mode here sounds > reasonable. > I see why there was no need for a mmap_lock in write mode, but I think taking the new rwsem in write mode is unnecessary. Basically, I see this as a signal to new readers to abort, but we don't need to wait for current readers to finish before this one increments the atomic. 
Unless I missed something, I don't think you want to take the write lock here. Thanks, Liam
On Tue, Jan 30, 2024 at 9:28 AM Liam R. Howlett <Liam.Howlett@oracle.com> wrote: > > * Mike Rapoport <rppt@kernel.org> [240130 03:55]: > > On Mon, Jan 29, 2024 at 10:46:27PM -0500, Liam R. Howlett wrote: > > > * Lokesh Gidra <lokeshgidra@google.com> [240129 17:35]: > > > > On Mon, Jan 29, 2024 at 1:00 PM Liam R. Howlett <Liam.Howlett@oracle.com> wrote: > > > > > > > > > > * Lokesh Gidra <lokeshgidra@google.com> [240129 14:35]: > > > > > > Increments and loads to mmap_changing are always in mmap_lock > > > > > > critical section. > > > > > > > > > > Read or write? > > > > > > > > > It's write-mode when incrementing (except in case of > > > > userfaultfd_remove() where it's done in read-mode) and loads are in > > > > mmap_lock (read-mode). I'll clarify this in the next version. > > > > > > > > > > > This ensures that if userspace requests event > > > > > > notification for non-cooperative operations (e.g. mremap), userfaultfd > > > > > > operations don't occur concurrently. > > > > > > > > > > > > This can be achieved by using a separate read-write semaphore in > > > > > > userfaultfd_ctx such that increments are done in write-mode and loads > > > > > > in read-mode, thereby eliminating the dependency on mmap_lock for this > > > > > > purpose. > > > > > > > > > > > > This is a preparatory step before we replace mmap_lock usage with > > > > > > per-vma locks in fill/move ioctls. > > > > > > > > > > > > Signed-off-by: Lokesh Gidra <lokeshgidra@google.com> > > > > > > --- > > > > > > fs/userfaultfd.c | 40 ++++++++++++---------- > > > > > > include/linux/userfaultfd_k.h | 31 ++++++++++-------- > > > > > > mm/userfaultfd.c | 62 ++++++++++++++++++++--------------- > > > > > > 3 files changed, 75 insertions(+), 58 deletions(-) > > > > > > > > > > > > diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c > > > > > > index 58331b83d648..c00a021bcce4 100644 > > > > > > --- a/fs/userfaultfd.c > > > > > > +++ b/fs/userfaultfd.c > > > > > > @@ -685,12 +685,15 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs) > > > > > > ctx->flags = octx->flags; > > > > > > ctx->features = octx->features; > > > > > > ctx->released = false; > > > > > > + init_rwsem(&ctx->map_changing_lock); > > > > > > atomic_set(&ctx->mmap_changing, 0); > > > > > > ctx->mm = vma->vm_mm; > > > > > > mmgrab(ctx->mm); > > > > > > > > > > > > userfaultfd_ctx_get(octx); > > > > > > + down_write(&octx->map_changing_lock); > > > > > > atomic_inc(&octx->mmap_changing); > > > > > > + up_write(&octx->map_changing_lock); > > > > > > On init, I don't think taking the lock is strictly necessary - unless > > > there is a way to access it before this increment? Not that it would > > > cost much. > > > > It's fork, the lock is for the context of the parent process and there > > could be uffdio ops running in parallel on its VM. > > Is this necessary then? We are getting the octx from another mm but the > mm is locked for forking. Why does it matter if there are readers of > the octx? > > I assume, currently, there is no way the userfaultfd ctx can > be altered under mmap_lock held for writing. I would think it matters if > there are writers (which, I presume are blocked by the mmap_lock for > now?) Shouldn't we hold the write lock for the entire dup process, I > mean, if we remove the userfaultfd from the mmap_lock, we cannot let the > structure being duplicated change half way through the dup process? > > I must be missing something with where this is headed? 
> AFAIU, the purpose of mmap_changing is to serialize uffdio operations with non-cooperative events if and when such events are being monitored by userspace (in case you missed, in all the cases of writes to mmap_changing, we only do it if that non-cooperative event has been requested by the user). As you pointed out there are no correctness concerns as far as userfaultfd operations are concerned. But these events are essential for the uffd monitor's functioning. For example: say the uffd monitor wants to be notified for REMAP operations while doing uffdio_copy operations. When COPY ioctls start failing with -EAGAIN and uffdio_copy.copy == 0, then it knows it must be due to mremap(), in which case it waits for the REMAP event notification before attempting COPY again. But there are few things that I didn't get after going through the history of non-cooperative events. Hopefully Mike (or someone else familiar) can clarify: IIUC, the idea behind non-cooperative events was to block uffdio operations from happening *before* the page tables are manipulated by the event (like mremap), and that the uffdio ops are resumed after the event notification is received by the monitor. If so then: 1) Why in the case of REMAP prep() is done after page-tables are moved? Shouldn't it be done before? All other non-cooperative operations do the prep() before. 2) UFFD_FEATURE_EVENT_REMOVE only notifies user space. It is not consistently blocking uffdio operations (as both sides are acquiring mmap_lock in read-mode) when remove operation is taking place. I can understand this was intentionally left as is in the interest of not acquiring mmap_lock in write-mode during madvise. But is only getting the notification any useful? Can we say this patch fixes it? And in that case shouldn't I split userfaultfd_remove() into two functions (like other non-cooperative operations)? 3) Based on [1] I see how mmap_changing helps in eliminating duplicate work (background copy) by uffd monitor, but didn't get if there is a correctness aspect too that I'm missing? I concur with Amit's point in [1] that getting -EEXIST when setting up the pte will avoid memory corruption, no? [1] https://lore.kernel.org/lkml/20201206093703.GY123287@linux.ibm.com/ > > > > > > > You could use the first bit of the atomic_inc as indication of a write. > > > > > So if the mmap_changing is even, then there are no writers. If it > > > > > didn't change and it's even then you know no modification has happened > > > > > (or it overflowed and hit the same number which would be rare, but > > > > > maybe okay?). > > > > > > > > This is already achievable, right? If mmap_changing is >0 then we know > > > > there are writers. The problem is that we want writers (like mremap > > > > operations) to block as long as there is a userfaultfd operation (also > > > > reader of mmap_changing) going on. Please note that I'm inferring this > > > > from current implementation. > > > > > > > > AFAIU, mmap_changing isn't required for correctness, because all > > > > operations are happening under the right mode of mmap_lock. It's used > > > > to ensure that while a non-cooperative operations is happening, if the > > > > user has asked it to be notified, then no other userfaultfd operations > > > > should take place until the user gets the event notification. > > > > > > I think it is needed, mmap_changing is read before the mmap_lock is > > > taken, then compared after the mmap_lock is taken (both read mode) to > > > ensure nothing has changed. 
> > > > mmap_changing is required to ensure that no uffdio operation runs in > > parallel with operations that modify the memory map, like fork, mremap, > > munmap and some of madvise calls. > > And we do need the writers to block if there is an uffdio operation going > > on, so I think an rwsem is the right way to protect mmap_chaniging. > > > > > > > > @@ -783,7 +788,9 @@ bool userfaultfd_remove(struct vm_area_struct *vma, > > > > > > return true; > > > > > > > > > > > > userfaultfd_ctx_get(ctx); > > > > > > + down_write(&ctx->map_changing_lock); > > > > > > atomic_inc(&ctx->mmap_changing); > > > > > > + up_write(&ctx->map_changing_lock); > > > > > > mmap_read_unlock(mm); > > > > > > > > > > > > msg_init(&ewq.msg); > > > > > > If this happens in read mode, then why are you waiting for the readers > > > to leave? Can't you just increment the atomic? It's fine happening in > > > read mode today, so it should be fine with this new rwsem. > > > > It's been a while and the details are blurred now, but if I remember > > correctly, having this in read mode forced non-cooperative uffd monitor to > > be single threaded. If a monitor runs, say uffdio_copy, and in parallel a > > thread in the monitored process does MADV_DONTNEED, the latter will wait > > for userfaultfd_remove notification to be processed in the monitor and drop > > the VMA contents only afterwards. If a non-cooperative monitor would > > process notification in parallel with uffdio ops, MADV_DONTNEED could > > continue and race with uffdio_copy, so read mode wouldn't be enough. > > > > Right now this function won't stop to wait for readers to exit the > critical section, but with this change there will be a pause (since the > down_write() will need to wait for the readers with the read lock). So > this is adding a delay in this call path that isn't necessary (?) nor > existed before. If you have non-cooperative uffd monitors, then you > will have to wait for them to finish to mark the uffd as being removed, > where as before it was a fire & forget, this is now a wait to tell. > I think a lot will be clearer once we get a response to my questions above. IMHO not only this write-lock is needed here, we need to fix userfaultfd_remove() by splitting it into userfaultfd_remove_prep() and userfaultfd_remove_complete() (like all other non-cooperative operations) as well. This patch enables us to do that as we remove mmap_changing's dependency on mmap_lock for synchronization. > > > There was no much sense to make MADV_DONTNEED take mmap_lock in write mode > > just for this, but now taking the rwsem in write mode here sounds > > reasonable. > > > > I see why there was no need for a mmap_lock in write mode, but I think > taking the new rwsem in write mode is unnecessary. > > Basically, I see this as a signal to new readers to abort, but we don't > need to wait for current readers to finish before this one increments > the atomic. > > Unless I missed something, I don't think you want to take the write lock > here. What I understood from the history of mmap_changing is that the intention was to enable informing the uffd monitor about the correct state of which pages are filled and which aren't. Going through this thread was very helpful [2] [2] https://lore.kernel.org/lkml/1527061324-19949-1-git-send-email-rppt@linux.vnet.ibm.com/ > > Thanks, > Liam
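For reference, the monitor-side pattern described above (retry a copy that bounced off mmap_changing only after the pending event has been consumed) looks roughly like the sketch below. This is a simplified illustration, not CRIU code: it assumes the userfaultfd was created with the relevant UFFD_FEATURE_EVENT_* features, that the fd is blocking, that page-fault messages are dispatched elsewhere, and copy_with_retry() is a made-up helper name.

#include <errno.h>
#include <linux/userfaultfd.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <unistd.h>

static int copy_with_retry(int uffd, uint64_t dst, uint64_t src, uint64_t len)
{
	for (;;) {
		struct uffdio_copy copy = {
			.dst = dst, .src = src, .len = len, .mode = 0,
		};
		struct uffd_msg msg;

		if (ioctl(uffd, UFFDIO_COPY, &copy) == 0)
			return 0;
		if (errno != EAGAIN)
			return -1;

		if (copy.copy > 0) {
			/* Partial progress: resume past what was copied. */
			dst += copy.copy;
			src += copy.copy;
			len -= copy.copy;
			continue;
		}

		/* No progress: mmap_changing was set, so a non-cooperative
		 * event is pending.  Consume it before retrying. */
		if (read(uffd, &msg, sizeof(msg)) != (ssize_t)sizeof(msg))
			return -1;
		if (msg.event == UFFD_EVENT_REMAP &&
		    dst >= msg.arg.remap.from &&
		    dst < msg.arg.remap.from + msg.arg.remap.len)
			/* The destination range moved; chase its new address. */
			dst = msg.arg.remap.to + (dst - msg.arg.remap.from);
		/* FORK/REMOVE/UNMAP notifications would be handled here too. */
	}
}

In this sketch the blocking read() is the "wait for the event notification" step mentioned above; a real monitor would dispatch page faults and the other event types from the same read loop.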
On Tue, Jan 30, 2024 at 06:24:24PM -0800, Lokesh Gidra wrote: > On Tue, Jan 30, 2024 at 9:28 AM Liam R. Howlett <Liam.Howlett@oracle.com> wrote: > > > > * Mike Rapoport <rppt@kernel.org> [240130 03:55]: > > > On Mon, Jan 29, 2024 at 10:46:27PM -0500, Liam R. Howlett wrote: > > > > * Lokesh Gidra <lokeshgidra@google.com> [240129 17:35]: > > > > > > > > > > > diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c > > > > > > > index 58331b83d648..c00a021bcce4 100644 > > > > > > > --- a/fs/userfaultfd.c > > > > > > > +++ b/fs/userfaultfd.c > > > > > > > @@ -685,12 +685,15 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs) > > > > > > > ctx->flags = octx->flags; > > > > > > > ctx->features = octx->features; > > > > > > > ctx->released = false; > > > > > > > + init_rwsem(&ctx->map_changing_lock); > > > > > > > atomic_set(&ctx->mmap_changing, 0); > > > > > > > ctx->mm = vma->vm_mm; > > > > > > > mmgrab(ctx->mm); > > > > > > > > > > > > > > userfaultfd_ctx_get(octx); > > > > > > > + down_write(&octx->map_changing_lock); > > > > > > > atomic_inc(&octx->mmap_changing); > > > > > > > + up_write(&octx->map_changing_lock); > > > > > > > > On init, I don't think taking the lock is strictly necessary - unless > > > > there is a way to access it before this increment? Not that it would > > > > cost much. > > > > > > It's fork, the lock is for the context of the parent process and there > > > could be uffdio ops running in parallel on its VM. > > > > Is this necessary then? We are getting the octx from another mm but the > > mm is locked for forking. Why does it matter if there are readers of > > the octx? > > > > I assume, currently, there is no way the userfaultfd ctx can > > be altered under mmap_lock held for writing. I would think it matters if > > there are writers (which, I presume are blocked by the mmap_lock for > > now?) Shouldn't we hold the write lock for the entire dup process, I > > mean, if we remove the userfaultfd from the mmap_lock, we cannot let the > > structure being duplicated change half way through the dup process? > > > > I must be missing something with where this is headed? > > > AFAIU, the purpose of mmap_changing is to serialize uffdio operations > with non-cooperative events if and when such events are being > monitored by userspace (in case you missed, in all the cases of writes > to mmap_changing, we only do it if that non-cooperative event has been > requested by the user). As you pointed out there are no correctness > concerns as far as userfaultfd operations are concerned. But these > events are essential for the uffd monitor's functioning. > > For example: say the uffd monitor wants to be notified for REMAP > operations while doing uffdio_copy operations. When COPY ioctls start > failing with -EAGAIN and uffdio_copy.copy == 0, then it knows it must > be due to mremap(), in which case it waits for the REMAP event > notification before attempting COPY again. > > But there are few things that I didn't get after going through the > history of non-cooperative events. Hopefully Mike (or someone else > familiar) can clarify: > > IIUC, the idea behind non-cooperative events was to block uffdio > operations from happening *before* the page tables are manipulated by > the event (like mremap), and that the uffdio ops are resumed after the > event notification is received by the monitor. The idea was to give userspace some way to serialize processing of non-cooperative event notifications and uffdio operations running in parallel. 
It's not necessary to block uffdio operations from happening before changes to the memory map, but with the mmap_lock synchronization that was already there, adding mmap_changing to prevent uffdio operations while mmap_lock is taken for write was the simplest thing to do. When CRIU does post-copy restore of a process, its uffd monitor reacts to page-fault and non-cooperative notifications and also performs a background copy of the memory contents from the saved state to the address space of the process being restored. Since non-cooperative events may happen completely independently of the uffd monitor, there are cases where the uffd monitor cannot identify the order of events, e.g. what won the race on mmap_lock: the process thread doing fork or the uffd monitor's uffdio_copy. In the fork vs uffdio_copy example, without mmap_changing, if the uffdio_copy takes the mmap_lock first, the new page will be present in the parent by the time copy_page_range() is called and the page will appear in the child's memory mappings by the time the uffd monitor gets the notification about the fork event. However, if the fork() is the first to take the mmap_lock, the new page will appear in the parent address space after copy_page_range() and it won't be mapped in the child's address space. With mmap_changing and the current locking with mmap_lock, we have a guarantee that uffdio_copy will bail out if fork has already taken mmap_lock, and the monitor can act appropriately. > 1) Why in the case of REMAP prep() is done after page-tables are > moved? Shouldn't it be done before? All other non-cooperative > operations do the prep() before. mremap_userfaultfd_prep() is done after the page tables are moved because it initializes the uffd context on the new_vma, and if the actual remap fails, there's no point in doing it. Since mremap holds mmap_lock for write, it does not matter if mmap_changing is updated before or after the page tables are moved. In the time between when mmap_lock is released and when UFFD_EVENT_REMAP is delivered to the uffd monitor, mmap_changing will remain >0 and uffdio operations will bail out. > 2) UFFD_FEATURE_EVENT_REMOVE only notifies user space. It is not > consistently blocking uffdio operations (as both sides are acquiring > mmap_lock in read-mode) when remove operation is taking place. I can > understand this was intentionally left as is in the interest of not > acquiring mmap_lock in write-mode during madvise. But is only getting > the notification any useful? Can we say this patch fixes it? And in > that case shouldn't I split userfaultfd_remove() into two functions > (like other non-cooperative operations)? The notifications are useful because the uffd monitor knows what memory should not be filled with uffdio_copy.
In the fork case without mmap_changing, the child process may get data or zeroes depending on the race for mmap_lock between the fork and the uffdio_copy, and -EEXIST is not enough for the monitor to detect what the ordering between fork and uffdio_copy was. > > > > > > > @@ -783,7 +788,9 @@ bool userfaultfd_remove(struct vm_area_struct *vma, > > > > > > > return true; > > > > > > > > > > > > > > userfaultfd_ctx_get(ctx); > > > > > > > + down_write(&ctx->map_changing_lock); > > > > > > > atomic_inc(&ctx->mmap_changing); > > > > > > > + up_write(&ctx->map_changing_lock); > > > > > > > mmap_read_unlock(mm); > > > > > > > > > > > > > > msg_init(&ewq.msg); > > > > > > > > If this happens in read mode, then why are you waiting for the readers > > > > to leave? Can't you just increment the atomic? It's fine happening in > > > > read mode today, so it should be fine with this new rwsem. > > > > > > It's been a while and the details are blurred now, but if I remember > > > correctly, having this in read mode forced non-cooperative uffd monitor to > > > be single threaded. If a monitor runs, say uffdio_copy, and in parallel a > > > thread in the monitored process does MADV_DONTNEED, the latter will wait > > > for userfaultfd_remove notification to be processed in the monitor and drop > > > the VMA contents only afterwards. If a non-cooperative monitor would > > > process notification in parallel with uffdio ops, MADV_DONTNEED could > > > continue and race with uffdio_copy, so read mode wouldn't be enough. > > > > > > > Right now this function won't stop to wait for readers to exit the > > critical section, but with this change there will be a pause (since the > > down_write() will need to wait for the readers with the read lock). So > > this is adding a delay in this call path that isn't necessary (?) nor > > existed before. If you have non-cooperative uffd monitors, then you > > will have to wait for them to finish to mark the uffd as being removed, > > where as before it was a fire & forget, this is now a wait to tell. > > > I think a lot will be clearer once we get a response to my questions > above. IMHO not only this write-lock is needed here, we need to fix > userfaultfd_remove() by splitting it into userfaultfd_remove_prep() > and userfaultfd_remove_complete() (like all other non-cooperative > operations) as well. This patch enables us to do that as we remove > mmap_changing's dependency on mmap_lock for synchronization. The write-lock is not a requirement here for correctness and I don't see why we would need userfaultfd_remove_prep(). As I've said earlier, having a write-lock here will let CRIU run the background copy in parallel with processing of uffd events, but I don't feel strongly about doing it. > > > There was no much sense to make MADV_DONTNEED take mmap_lock in write mode > > > just for this, but now taking the rwsem in write mode here sounds > > > reasonable. > > > > > > > I see why there was no need for a mmap_lock in write mode, but I think > > taking the new rwsem in write mode is unnecessary. > > > > Basically, I see this as a signal to new readers to abort, but we don't > > need to wait for current readers to finish before this one increments > > the atomic. > > > > Unless I missed something, I don't think you want to take the write lock > > here. > What I understood from the history of mmap_changing is that the > intention was to enable informing the uffd monitor about the correct > state of which pages are filled and which aren't.
Going through this > thread was very helpful [2] > > [2] https://lore.kernel.org/lkml/1527061324-19949-1-git-send-email-rppt@linux.vnet.ibm.com/
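To make the fork case above concrete: with UFFD_FEATURE_EVENT_FORK the kernel queues a UFFD_EVENT_FORK message carrying a new userfaultfd for the child, and a copy that bailed with -EAGAIN around the fork is known not to have been duplicated by copy_page_range(), so the monitor knows it must repeat the copy for the parent and fill the child through the new descriptor. A hedged sketch of the dispatch (handle_child_uffd() is a hypothetical monitor hook, not an existing API):

#include <linux/userfaultfd.h>
#include <unistd.h>

extern void handle_child_uffd(int child_uffd);	/* hypothetical monitor hook */

static void dispatch_one_event(int uffd)
{
	struct uffd_msg msg;

	if (read(uffd, &msg, sizeof(msg)) != (ssize_t)sizeof(msg))
		return;

	switch (msg.event) {
	case UFFD_EVENT_FORK:
		/* arg.fork.ufd is a userfaultfd for the child; start serving
		 * faults and background copies for the child through it. */
		handle_child_uffd((int)msg.arg.fork.ufd);
		break;
	case UFFD_EVENT_REMAP:
		/* Re-target any pending work from arg.remap.from to
		 * arg.remap.to (length arg.remap.len). */
		break;
	default:
		break;
	}
}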
On Sun, Feb 4, 2024 at 2:27 AM Mike Rapoport <rppt@kernel.org> wrote: > > On Tue, Jan 30, 2024 at 06:24:24PM -0800, Lokesh Gidra wrote: > > On Tue, Jan 30, 2024 at 9:28 AM Liam R. Howlett <Liam.Howlett@oracle.com> wrote: > > > > > > * Mike Rapoport <rppt@kernel.org> [240130 03:55]: > > > > On Mon, Jan 29, 2024 at 10:46:27PM -0500, Liam R. Howlett wrote: > > > > > * Lokesh Gidra <lokeshgidra@google.com> [240129 17:35]: > > > > > > > > > > > > > diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c > > > > > > > > index 58331b83d648..c00a021bcce4 100644 > > > > > > > > --- a/fs/userfaultfd.c > > > > > > > > +++ b/fs/userfaultfd.c > > > > > > > > @@ -685,12 +685,15 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs) > > > > > > > > ctx->flags = octx->flags; > > > > > > > > ctx->features = octx->features; > > > > > > > > ctx->released = false; > > > > > > > > + init_rwsem(&ctx->map_changing_lock); > > > > > > > > atomic_set(&ctx->mmap_changing, 0); > > > > > > > > ctx->mm = vma->vm_mm; > > > > > > > > mmgrab(ctx->mm); > > > > > > > > > > > > > > > > userfaultfd_ctx_get(octx); > > > > > > > > + down_write(&octx->map_changing_lock); > > > > > > > > atomic_inc(&octx->mmap_changing); > > > > > > > > + up_write(&octx->map_changing_lock); > > > > > > > > > > On init, I don't think taking the lock is strictly necessary - unless > > > > > there is a way to access it before this increment? Not that it would > > > > > cost much. > > > > > > > > It's fork, the lock is for the context of the parent process and there > > > > could be uffdio ops running in parallel on its VM. > > > > > > Is this necessary then? We are getting the octx from another mm but the > > > mm is locked for forking. Why does it matter if there are readers of > > > the octx? > > > > > > I assume, currently, there is no way the userfaultfd ctx can > > > be altered under mmap_lock held for writing. I would think it matters if > > > there are writers (which, I presume are blocked by the mmap_lock for > > > now?) Shouldn't we hold the write lock for the entire dup process, I > > > mean, if we remove the userfaultfd from the mmap_lock, we cannot let the > > > structure being duplicated change half way through the dup process? > > > > > > I must be missing something with where this is headed? > > > > > AFAIU, the purpose of mmap_changing is to serialize uffdio operations > > with non-cooperative events if and when such events are being > > monitored by userspace (in case you missed, in all the cases of writes > > to mmap_changing, we only do it if that non-cooperative event has been > > requested by the user). As you pointed out there are no correctness > > concerns as far as userfaultfd operations are concerned. But these > > events are essential for the uffd monitor's functioning. > > > > For example: say the uffd monitor wants to be notified for REMAP > > operations while doing uffdio_copy operations. When COPY ioctls start > > failing with -EAGAIN and uffdio_copy.copy == 0, then it knows it must > > be due to mremap(), in which case it waits for the REMAP event > > notification before attempting COPY again. > > > > But there are few things that I didn't get after going through the > > history of non-cooperative events. 
Hopefully Mike (or someone else > > familiar) can clarify: > > > > IIUC, the idea behind non-cooperative events was to block uffdio > > operations from happening *before* the page tables are manipulated by > > the event (like mremap), and that the uffdio ops are resumed after the > > event notification is received by the monitor. > > The idea was to give userspace some way to serialize processing of > non-cooperative event notifications and uffdio operations running in > parallel. It's not necessary to block uffdio operations from happening > before changes to the memory map, but with the mmap_lock synchronization > that already was there adding mmap_chaning that will prevent uffdio > operations when mmap_lock is taken for write was the simplest thing to do. > > When CRIU does post-copy restore of a process, its uffd monitor reacts to > page fault and non-cooperative notifications and also performs a background > copy of the memory contents from the saved state to the address space of > the process being restored. > > Since non-cooperative events may happen completely independent from the > uffd monitor, there are cases when the uffd monitor couldn't identify the > order of events, like e.g. what won the race on mmap_lock, the process > thread doing fork or the uffd monitor's uffdio_copy. > > In the fork vs uffdio_copy example, without mmap_changing, if the > uffdio_copy takes the mmap_lock first, the new page will be present in the > parent by the time copy_page_range() is called and the page will appear in > the child's memory mappings by the time uffd monitor gets notification > about the fork event. However, if the fork() is the first to take the > mmap_lock, the new page will appear in the parent address space after > copy_page_range() and it won't be mapped in the child's address space. > > With mmap_changing and current locking with mmap_lock, we have a guarantee > that uffdio_copy will bail out if fork already took mmap_lock and the > monitor can act appropriately. > Thanks for the explanation. Really helpful! > > 1) Why in the case of REMAP prep() is done after page-tables are > > moved? Shouldn't it be done before? All other non-cooperative > > operations do the prep() before. > > mremap_userfaultfd_prep() is done after page tables are moved because it > initializes uffd context on the new_vma and if the actual remap fails, > there's no point of doing it. > Since mrpemap holds mmap_lock for write it does not matter if mmap_changed > is updated before or after page tables are moved. In the time between > mmap_lock is released and the UFFD_EVENT_REMAP is delivered to the uffd > monitor, mmap_chaging will remain >0 and uffdio operations will bail out. > Yes this makes sense. Even with per-vma locks, I see that the new_vma is write-locked (vma_start_write()) in vma_link() guaranteeing the same. > > 2) UFFD_FEATURE_EVENT_REMOVE only notifies user space. It is not > > consistently blocking uffdio operations (as both sides are acquiring > > mmap_lock in read-mode) when remove operation is taking place. I can > > understand this was intentionally left as is in the interest of not > > acquiring mmap_lock in write-mode during madvise. But is only getting > > the notification any useful? Can we say this patch fixes it? And in > > that case shouldn't I split userfaultfd_remove() into two functions > > (like other non-cooperative operations)? > > The notifications are useful because uffd monitor knows what memory should > not be filled with uffdio_copy. 
Indeed there was no interest in taking > mmap_lock for write in madvise, so there could be race between madvise and > uffdio operations. This race essentially prevents uffd monitor from running > the background copy in a separate thread, and with your change this should > be possible. > Makes sense. Thanks! > > 3) Based on [1] I see how mmap_changing helps in eliminating duplicate > > work (background copy) by uffd monitor, but didn't get if there is a > > correctness aspect too that I'm missing? I concur with Amit's point in > > [1] that getting -EEXIST when setting up the pte will avoid memory > > corruption, no? > > In the fork case without mmap_changing the child process may be get data or > zeroes depending on the race for mmap_lock between the fork and > uffdio_copy and -EEXIST is not enough for monitor to detect what was the > ordering between fork and uffdio_copy. This is extremely helpful. IIUC, there is a window after mmap_lock (write-mode) is released and before the uffd monitor thread is notified of fork. In that window, the monitor doesn't know that fork has already happened. So, without mmap_changing it would have done background copy only in the parent, thereby causing data inconsistency between parent and child processes. It seems to me that the correctness argument for mmap_changing is there in case of FORK event and REMAP when mremap is called with MREMAP_DONTUNMAP. In all other cases its only benefit is by avoiding unnecessary background copies, right? > > > > > > > > > @@ -783,7 +788,9 @@ bool userfaultfd_remove(struct vm_area_struct *vma, > > > > > > > > return true; > > > > > > > > > > > > > > > > userfaultfd_ctx_get(ctx); > > > > > > > > + down_write(&ctx->map_changing_lock); > > > > > > > > atomic_inc(&ctx->mmap_changing); > > > > > > > > + up_write(&ctx->map_changing_lock); > > > > > > > > mmap_read_unlock(mm); > > > > > > > > > > > > > > > > msg_init(&ewq.msg); > > > > > > > > > > If this happens in read mode, then why are you waiting for the readers > > > > > to leave? Can't you just increment the atomic? It's fine happening in > > > > > read mode today, so it should be fine with this new rwsem. > > > > > > > > It's been a while and the details are blurred now, but if I remember > > > > correctly, having this in read mode forced non-cooperative uffd monitor to > > > > be single threaded. If a monitor runs, say uffdio_copy, and in parallel a > > > > thread in the monitored process does MADV_DONTNEED, the latter will wait > > > > for userfaultfd_remove notification to be processed in the monitor and drop > > > > the VMA contents only afterwards. If a non-cooperative monitor would > > > > process notification in parallel with uffdio ops, MADV_DONTNEED could > > > > continue and race with uffdio_copy, so read mode wouldn't be enough. > > > > > > > > > > Right now this function won't stop to wait for readers to exit the > > > critical section, but with this change there will be a pause (since the > > > down_write() will need to wait for the readers with the read lock). So > > > this is adding a delay in this call path that isn't necessary (?) nor > > > existed before. If you have non-cooperative uffd monitors, then you > > > will have to wait for them to finish to mark the uffd as being removed, > > > where as before it was a fire & forget, this is now a wait to tell. > > > > > I think a lot will be clearer once we get a response to my questions > > above. 
IMHO not only this write-lock is needed here, we need to fix > > userfaultfd_remove() by splitting it into userfaultfd_remove_prep() > > and userfaultfd_remove_complete() (like all other non-cooperative > > operations) as well. This patch enables us to do that as we remove > > mmap_changing's dependency on mmap_lock for synchronization. > > The write-lock is not a requirement here for correctness and I don't see > why we would need userfaultfd_remove_prep(). > > As I've said earlier, having a write-lock here will let CRIU to run > background copy in parallel with processing of uffd events, but I don't > feel strongly about doing it. > Got it. Anyways, such a change needn't be part of this patch, so I'm going to keep it unchanged. > > > > There was no much sense to make MADV_DONTNEED take mmap_lock in write mode > > > > just for this, but now taking the rwsem in write mode here sounds > > > > reasonable. > > > > > > > > > > I see why there was no need for a mmap_lock in write mode, but I think > > > taking the new rwsem in write mode is unnecessary. > > > > > > Basically, I see this as a signal to new readers to abort, but we don't > > > need to wait for current readers to finish before this one increments > > > the atomic. > > > > > > Unless I missed something, I don't think you want to take the write lock > > > here. > > What I understood from the history of mmap_changing is that the > > intention was to enable informing the uffd monitor about the correct > > state of which pages are filled and which aren't. Going through this > > thread was very helpful [2] > > > > [2] https://lore.kernel.org/lkml/1527061324-19949-1-git-send-email-rppt@linux.vnet.ibm.com/ > > -- > Sincerely yours, > Mike.
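On the REMOVE side discussed above, the notification is purely bookkeeping for the monitor: it marks the reported range so the background copier stops filling it. A toy sketch of that bookkeeping (a single interval stands in for whatever range tracking a real monitor would use, e.g. an interval tree):

#include <linux/userfaultfd.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy "do not background-copy" tracking: remembers one removed interval. */
static uint64_t removed_start, removed_end;

static void note_removed(uint64_t start, uint64_t end)
{
	removed_start = start;
	removed_end = end;
}

static bool may_background_copy(uint64_t addr, uint64_t len)
{
	return addr + len <= removed_start || addr >= removed_end;
}

/* Feed this with messages read from the userfaultfd.  UFFD_EVENT_REMOVE
 * (MADV_DONTNEED/MADV_REMOVE) and UFFD_EVENT_UNMAP both report the affected
 * range in arg.remove. */
static void handle_remove(const struct uffd_msg *msg)
{
	if (msg->event == UFFD_EVENT_REMOVE || msg->event == UFFD_EVENT_UNMAP)
		note_removed(msg->arg.remove.start, msg->arg.remove.end);
}

With the rwsem taken in write mode at the increment site, this kind of event bookkeeping can run in a thread separate from the background copier, which is the parallelism Mike refers to.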
On Mon, Feb 05, 2024 at 12:53:33PM -0800, Lokesh Gidra wrote: > On Sun, Feb 4, 2024 at 2:27 AM Mike Rapoport <rppt@kernel.org> wrote: > > > > > 3) Based on [1] I see how mmap_changing helps in eliminating duplicate > > > work (background copy) by uffd monitor, but didn't get if there is a > > > correctness aspect too that I'm missing? I concur with Amit's point in > > > [1] that getting -EEXIST when setting up the pte will avoid memory > > > corruption, no? > > > > In the fork case without mmap_changing the child process may be get data or > > zeroes depending on the race for mmap_lock between the fork and > > uffdio_copy and -EEXIST is not enough for monitor to detect what was the > > ordering between fork and uffdio_copy. > > This is extremely helpful. IIUC, there is a window after mmap_lock > (write-mode) is released and before the uffd monitor thread is > notified of fork. In that window, the monitor doesn't know that fork > has already happened. So, without mmap_changing it would have done > background copy only in the parent, thereby causing data inconsistency > between parent and child processes. Yes. > It seems to me that the correctness argument for mmap_changing is > there in case of FORK event and REMAP when mremap is called with > MREMAP_DONTUNMAP. In all other cases its only benefit is by avoiding > unnecessary background copies, right? Yes, I think you are right, but it's possible I've forgot some nasty race that will need mmap_changing for other events. > > > > > > > > > @@ -783,7 +788,9 @@ bool userfaultfd_remove(struct vm_area_struct *vma, > > > > > > > > > return true; > > > > > > > > > > > > > > > > > > userfaultfd_ctx_get(ctx); > > > > > > > > > + down_write(&ctx->map_changing_lock); > > > > > > > > > atomic_inc(&ctx->mmap_changing); > > > > > > > > > + up_write(&ctx->map_changing_lock); > > > > > > > > > mmap_read_unlock(mm); > > > > > > > > > > > > > > > > > > msg_init(&ewq.msg); > > > > > > > > > > > > If this happens in read mode, then why are you waiting for the readers > > > > > > to leave? Can't you just increment the atomic? It's fine happening in > > > > > > read mode today, so it should be fine with this new rwsem. > > > > > > > > > > It's been a while and the details are blurred now, but if I remember > > > > > correctly, having this in read mode forced non-cooperative uffd monitor to > > > > > be single threaded. If a monitor runs, say uffdio_copy, and in parallel a > > > > > thread in the monitored process does MADV_DONTNEED, the latter will wait > > > > > for userfaultfd_remove notification to be processed in the monitor and drop > > > > > the VMA contents only afterwards. If a non-cooperative monitor would > > > > > process notification in parallel with uffdio ops, MADV_DONTNEED could > > > > > continue and race with uffdio_copy, so read mode wouldn't be enough. > > > > > > > > > > > > > Right now this function won't stop to wait for readers to exit the > > > > critical section, but with this change there will be a pause (since the > > > > down_write() will need to wait for the readers with the read lock). So > > > > this is adding a delay in this call path that isn't necessary (?) nor > > > > existed before. If you have non-cooperative uffd monitors, then you > > > > will have to wait for them to finish to mark the uffd as being removed, > > > > where as before it was a fire & forget, this is now a wait to tell. > > > > > > > I think a lot will be clearer once we get a response to my questions > > > above. 
IMHO not only this write-lock is needed here, we need to fix > > > userfaultfd_remove() by splitting it into userfaultfd_remove_prep() > > > and userfaultfd_remove_complete() (like all other non-cooperative > > > operations) as well. This patch enables us to do that as we remove > > > mmap_changing's dependency on mmap_lock for synchronization. > > > > The write-lock is not a requirement here for correctness and I don't see > > why we would need userfaultfd_remove_prep(). > > > > As I've said earlier, having a write-lock here will let CRIU to run > > background copy in parallel with processing of uffd events, but I don't > > feel strongly about doing it. > > > Got it. Anyways, such a change needn't be part of this patch, so I'm > going to keep it unchanged. You mean with a read lock?
On Wed, Feb 7, 2024 at 7:27 AM Mike Rapoport <rppt@kernel.org> wrote: > > On Mon, Feb 05, 2024 at 12:53:33PM -0800, Lokesh Gidra wrote: > > On Sun, Feb 4, 2024 at 2:27 AM Mike Rapoport <rppt@kernel.org> wrote: > > > > > > > 3) Based on [1] I see how mmap_changing helps in eliminating duplicate > > > > work (background copy) by uffd monitor, but didn't get if there is a > > > > correctness aspect too that I'm missing? I concur with Amit's point in > > > > [1] that getting -EEXIST when setting up the pte will avoid memory > > > > corruption, no? > > > > > > In the fork case without mmap_changing the child process may be get data or > > > zeroes depending on the race for mmap_lock between the fork and > > > uffdio_copy and -EEXIST is not enough for monitor to detect what was the > > > ordering between fork and uffdio_copy. > > > > This is extremely helpful. IIUC, there is a window after mmap_lock > > (write-mode) is released and before the uffd monitor thread is > > notified of fork. In that window, the monitor doesn't know that fork > > has already happened. So, without mmap_changing it would have done > > background copy only in the parent, thereby causing data inconsistency > > between parent and child processes. > > Yes. > > > It seems to me that the correctness argument for mmap_changing is > > there in case of FORK event and REMAP when mremap is called with > > MREMAP_DONTUNMAP. In all other cases its only benefit is by avoiding > > unnecessary background copies, right? > > Yes, I think you are right, but it's possible I've forgot some nasty race > that will need mmap_changing for other events. > > > > > > > > > > > @@ -783,7 +788,9 @@ bool userfaultfd_remove(struct vm_area_struct *vma, > > > > > > > > > > return true; > > > > > > > > > > > > > > > > > > > > userfaultfd_ctx_get(ctx); > > > > > > > > > > + down_write(&ctx->map_changing_lock); > > > > > > > > > > atomic_inc(&ctx->mmap_changing); > > > > > > > > > > + up_write(&ctx->map_changing_lock); > > > > > > > > > > mmap_read_unlock(mm); > > > > > > > > > > > > > > > > > > > > msg_init(&ewq.msg); > > > > > > > > > > > > > > If this happens in read mode, then why are you waiting for the readers > > > > > > > to leave? Can't you just increment the atomic? It's fine happening in > > > > > > > read mode today, so it should be fine with this new rwsem. > > > > > > > > > > > > It's been a while and the details are blurred now, but if I remember > > > > > > correctly, having this in read mode forced non-cooperative uffd monitor to > > > > > > be single threaded. If a monitor runs, say uffdio_copy, and in parallel a > > > > > > thread in the monitored process does MADV_DONTNEED, the latter will wait > > > > > > for userfaultfd_remove notification to be processed in the monitor and drop > > > > > > the VMA contents only afterwards. If a non-cooperative monitor would > > > > > > process notification in parallel with uffdio ops, MADV_DONTNEED could > > > > > > continue and race with uffdio_copy, so read mode wouldn't be enough. > > > > > > > > > > > > > > > > Right now this function won't stop to wait for readers to exit the > > > > > critical section, but with this change there will be a pause (since the > > > > > down_write() will need to wait for the readers with the read lock). So > > > > > this is adding a delay in this call path that isn't necessary (?) nor > > > > > existed before. 
> > > > > If you have non-cooperative uffd monitors, then you
> > > > > will have to wait for them to finish to mark the uffd as being removed,
> > > > > whereas before it was a fire & forget, this is now a wait to tell.
> > > > >
> > > > I think a lot will be clearer once we get a response to my questions
> > > > above. IMHO not only this write-lock is needed here, we need to fix
> > > > userfaultfd_remove() by splitting it into userfaultfd_remove_prep()
> > > > and userfaultfd_remove_complete() (like all other non-cooperative
> > > > operations) as well. This patch enables us to do that as we remove
> > > > mmap_changing's dependency on mmap_lock for synchronization.
> > >
> > > The write-lock is not a requirement here for correctness and I don't see
> > > why we would need userfaultfd_remove_prep().
> > >
> > > As I've said earlier, having a write-lock here will let CRIU run
> > > background copy in parallel with processing of uffd events, but I don't
> > > feel strongly about doing it.
> >
> > Got it. Anyways, such a change needn't be part of this patch, so I'm
> > going to keep it unchanged.
>
> You mean with a read lock?

No, I think the write lock is good as it enables parallel background copy.
It also brings consistency in blocking userfaultfd operations.

I meant encapsulating remove operations within
userfaultfd_remove_prep() and userfaultfd_remove_complete(). I
couldn't figure out any need for that.

>
> --
> Sincerely yours,
> Mike.
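To make the fork race discussed above concrete, the following is a rough userspace sketch of the retry pattern a non-cooperative monitor ends up using: UFFDIO_COPY fails with EAGAIN while mmap_changing is raised, which forces the monitor to drain pending events (and learn about the fork) before re-issuing the copy. This is an illustration only, not code from CRIU or from this patch; it assumes the uffd was opened non-blocking and registered with UFFD_FEATURE_EVENT_FORK, and error handling is elided.

/*
 * Illustration only: background copy loop for a non-cooperative monitor.
 * Relies on the documented behaviour that UFFDIO_COPY fails with EAGAIN
 * while mmap_changing is non-zero, i.e. while an event such as fork is
 * pending.
 */
#include <errno.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/userfaultfd.h>

static int background_copy(int uffd, __u64 dst, __u64 src, __u64 len)
{
	struct uffdio_copy copy;
	struct uffd_msg msg;

	for (;;) {
		memset(&copy, 0, sizeof(copy));
		copy.dst = dst;
		copy.src = src;
		copy.len = len;

		if (ioctl(uffd, UFFDIO_COPY, &copy) == 0)
			return 0;		/* copied into the target mm */
		if (errno == EEXIST)
			return 0;		/* page already resolved by a fault */
		if (errno != EAGAIN)
			return -1;

		/*
		 * A non-cooperative event is in flight: drain the event
		 * queue first.  On UFFD_EVENT_FORK the child's uffd arrives
		 * in msg.arg.fork.ufd, so the copy can be repeated there as
		 * well instead of being silently missed in the child.
		 */
		while (read(uffd, &msg, sizeof(msg)) == sizeof(msg)) {
			if (msg.event == UFFD_EVENT_FORK)
				; /* remember msg.arg.fork.ufd for the child */
			/* handle REMAP/REMOVE/UNMAP events here as needed */
		}
	}
}

CRIU's real handling is considerably more involved; the point here is only the EAGAIN / drain-events / retry ordering that mmap_changing enforces.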
On Wed, Feb 07, 2024 at 12:24:52PM -0800, Lokesh Gidra wrote:
> On Wed, Feb 7, 2024 at 7:27 AM Mike Rapoport <rppt@kernel.org> wrote:
> >
> > > > The write-lock is not a requirement here for correctness and I don't see
> > > > why we would need userfaultfd_remove_prep().
> > > >
> > > > As I've said earlier, having a write-lock here will let CRIU run
> > > > background copy in parallel with processing of uffd events, but I don't
> > > > feel strongly about doing it.
> > > >
> > > Got it. Anyways, such a change needn't be part of this patch, so I'm
> > > going to keep it unchanged.
> >
> > You mean with a read lock?
>
> No, I think the write lock is good as it enables parallel background copy.
> It also brings consistency in blocking userfaultfd operations.
>
> I meant encapsulating remove operations within
> userfaultfd_remove_prep() and userfaultfd_remove_complete(). I
> couldn't figure out any need for that.

I don't think there is a need for that. With fork/mremap, prep is
required to ensure there's uffd context for new VMAs.
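Before the patch itself, a condensed sketch of the pattern it introduces may help when reading the hunks below. This is distilled from the diff, with a stand-in struct so it is self-contained; it is not kernel code to be applied. Non-cooperative events increment mmap_changing with map_changing_lock held for write (so they wait for in-flight uffdio operations to drain), while uffdio operations read mmap_changing with the lock held for read and return -EAGAIN if it is set.

/*
 * Condensed from the patch below -- not a drop-in implementation.
 * uffd_ctx_like stands in for struct userfaultfd_ctx; the real context
 * initializes the rwsem with init_rwsem() at creation time.
 */
#include <linux/rwsem.h>
#include <linux/atomic.h>

struct uffd_ctx_like {
	struct rw_semaphore map_changing_lock;
	atomic_t mmap_changing;
};

/* non-cooperative event path (fork, mremap, remove, unmap) */
static void event_start(struct uffd_ctx_like *ctx)
{
	down_write(&ctx->map_changing_lock);	/* waits for readers to drain */
	atomic_inc(&ctx->mmap_changing);
	up_write(&ctx->map_changing_lock);
}

/* uffdio operation path (copy/zeropage/continue/poison/wp/move);
 * the real code also holds mmap_read_lock() around this section. */
static int uffdio_op(struct uffd_ctx_like *ctx)
{
	int err;

	down_read(&ctx->map_changing_lock);
	err = -EAGAIN;
	if (atomic_read(&ctx->mmap_changing))
		goto out;	/* userspace must retry after handling the event */

	err = 0;		/* ... perform the copy/zeropage/etc. ... */
out:
	up_read(&ctx->map_changing_lock);
	return err;
}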
diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 58331b83d648..c00a021bcce4 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -685,12 +685,15 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs)
 	ctx->flags = octx->flags;
 	ctx->features = octx->features;
 	ctx->released = false;
+	init_rwsem(&ctx->map_changing_lock);
 	atomic_set(&ctx->mmap_changing, 0);
 	ctx->mm = vma->vm_mm;
 	mmgrab(ctx->mm);
 
 	userfaultfd_ctx_get(octx);
+	down_write(&octx->map_changing_lock);
 	atomic_inc(&octx->mmap_changing);
+	up_write(&octx->map_changing_lock);
 	fctx->orig = octx;
 	fctx->new = ctx;
 	list_add_tail(&fctx->list, fcs);
@@ -737,7 +740,9 @@ void mremap_userfaultfd_prep(struct vm_area_struct *vma,
 	if (ctx->features & UFFD_FEATURE_EVENT_REMAP) {
 		vm_ctx->ctx = ctx;
 		userfaultfd_ctx_get(ctx);
+		down_write(&ctx->map_changing_lock);
 		atomic_inc(&ctx->mmap_changing);
+		up_write(&ctx->map_changing_lock);
 	} else {
 		/* Drop uffd context if remap feature not enabled */
 		vma_start_write(vma);
@@ -783,7 +788,9 @@ bool userfaultfd_remove(struct vm_area_struct *vma,
 		return true;
 
 	userfaultfd_ctx_get(ctx);
+	down_write(&ctx->map_changing_lock);
 	atomic_inc(&ctx->mmap_changing);
+	up_write(&ctx->map_changing_lock);
 	mmap_read_unlock(mm);
 
 	msg_init(&ewq.msg);
@@ -825,7 +832,9 @@ int userfaultfd_unmap_prep(struct vm_area_struct *vma, unsigned long start,
 		return -ENOMEM;
 
 	userfaultfd_ctx_get(ctx);
+	down_write(&ctx->map_changing_lock);
 	atomic_inc(&ctx->mmap_changing);
+	up_write(&ctx->map_changing_lock);
 	unmap_ctx->ctx = ctx;
 	unmap_ctx->start = start;
 	unmap_ctx->end = end;
@@ -1709,9 +1718,8 @@ static int userfaultfd_copy(struct userfaultfd_ctx *ctx,
 	if (uffdio_copy.mode & UFFDIO_COPY_MODE_WP)
 		flags |= MFILL_ATOMIC_WP;
 	if (mmget_not_zero(ctx->mm)) {
-		ret = mfill_atomic_copy(ctx->mm, uffdio_copy.dst, uffdio_copy.src,
-					uffdio_copy.len, &ctx->mmap_changing,
-					flags);
+		ret = mfill_atomic_copy(ctx, uffdio_copy.dst, uffdio_copy.src,
+					uffdio_copy.len, flags);
 		mmput(ctx->mm);
 	} else {
 		return -ESRCH;
@@ -1761,9 +1769,8 @@ static int userfaultfd_zeropage(struct userfaultfd_ctx *ctx,
 		goto out;
 
 	if (mmget_not_zero(ctx->mm)) {
-		ret = mfill_atomic_zeropage(ctx->mm, uffdio_zeropage.range.start,
-					   uffdio_zeropage.range.len,
-					   &ctx->mmap_changing);
+		ret = mfill_atomic_zeropage(ctx, uffdio_zeropage.range.start,
+					   uffdio_zeropage.range.len);
 		mmput(ctx->mm);
 	} else {
 		return -ESRCH;
@@ -1818,9 +1825,8 @@ static int userfaultfd_writeprotect(struct userfaultfd_ctx *ctx,
 		return -EINVAL;
 
 	if (mmget_not_zero(ctx->mm)) {
-		ret = mwriteprotect_range(ctx->mm, uffdio_wp.range.start,
-					  uffdio_wp.range.len, mode_wp,
-					  &ctx->mmap_changing);
+		ret = mwriteprotect_range(ctx, uffdio_wp.range.start,
+					  uffdio_wp.range.len, mode_wp);
 		mmput(ctx->mm);
 	} else {
 		return -ESRCH;
@@ -1870,9 +1876,8 @@ static int userfaultfd_continue(struct userfaultfd_ctx *ctx, unsigned long arg)
 		flags |= MFILL_ATOMIC_WP;
 
 	if (mmget_not_zero(ctx->mm)) {
-		ret = mfill_atomic_continue(ctx->mm, uffdio_continue.range.start,
-					    uffdio_continue.range.len,
-					    &ctx->mmap_changing, flags);
+		ret = mfill_atomic_continue(ctx, uffdio_continue.range.start,
+					    uffdio_continue.range.len, flags);
 		mmput(ctx->mm);
 	} else {
 		return -ESRCH;
@@ -1925,9 +1930,8 @@ static inline int userfaultfd_poison(struct userfaultfd_ctx *ctx, unsigned long
 		goto out;
 
 	if (mmget_not_zero(ctx->mm)) {
-		ret = mfill_atomic_poison(ctx->mm, uffdio_poison.range.start,
-					  uffdio_poison.range.len,
-					  &ctx->mmap_changing, 0);
+		ret = mfill_atomic_poison(ctx, uffdio_poison.range.start,
+					  uffdio_poison.range.len, 0);
 		mmput(ctx->mm);
 	} else {
 		return -ESRCH;
@@ -2003,13 +2007,14 @@ static int userfaultfd_move(struct userfaultfd_ctx *ctx,
 	if (mmget_not_zero(mm)) {
 		mmap_read_lock(mm);
 
-		/* Re-check after taking mmap_lock */
+		/* Re-check after taking map_changing_lock */
+		down_read(&ctx->map_changing_lock);
 		if (likely(!atomic_read(&ctx->mmap_changing)))
 			ret = move_pages(ctx, mm, uffdio_move.dst, uffdio_move.src,
 					 uffdio_move.len, uffdio_move.mode);
 		else
 			ret = -EAGAIN;
-
+		up_read(&ctx->map_changing_lock);
 		mmap_read_unlock(mm);
 		mmput(mm);
 	} else {
@@ -2216,6 +2221,7 @@ static int new_userfaultfd(int flags)
 	ctx->flags = flags;
 	ctx->features = 0;
 	ctx->released = false;
+	init_rwsem(&ctx->map_changing_lock);
 	atomic_set(&ctx->mmap_changing, 0);
 	ctx->mm = current->mm;
 	/* prevent the mm struct to be freed */
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index 691d928ee864..3210c3552976 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -69,6 +69,13 @@ struct userfaultfd_ctx {
 	unsigned int features;
 	/* released */
 	bool released;
+	/*
+	 * Prevents userfaultfd operations (fill/move/wp) from happening while
+	 * some non-cooperative event(s) is taking place. Increments are done
+	 * in write-mode. Whereas, userfaultfd operations, which includes
+	 * reading mmap_changing, is done under read-mode.
+	 */
+	struct rw_semaphore map_changing_lock;
 	/* memory mappings are changing because of non-cooperative event */
 	atomic_t mmap_changing;
 	/* mm with one ore more vmas attached to this userfaultfd_ctx */
@@ -113,22 +120,18 @@ extern int mfill_atomic_install_pte(pmd_t *dst_pmd,
 				    unsigned long dst_addr, struct page *page,
 				    bool newly_allocated, uffd_flags_t flags);
 
-extern ssize_t mfill_atomic_copy(struct mm_struct *dst_mm, unsigned long dst_start,
+extern ssize_t mfill_atomic_copy(struct userfaultfd_ctx *ctx, unsigned long dst_start,
 				 unsigned long src_start, unsigned long len,
-				 atomic_t *mmap_changing, uffd_flags_t flags);
-extern ssize_t mfill_atomic_zeropage(struct mm_struct *dst_mm,
+				 uffd_flags_t flags);
+extern ssize_t mfill_atomic_zeropage(struct userfaultfd_ctx *ctx,
 				     unsigned long dst_start,
-				     unsigned long len,
-				     atomic_t *mmap_changing);
-extern ssize_t mfill_atomic_continue(struct mm_struct *dst_mm, unsigned long dst_start,
-				     unsigned long len, atomic_t *mmap_changing,
-				     uffd_flags_t flags);
-extern ssize_t mfill_atomic_poison(struct mm_struct *dst_mm, unsigned long start,
-				   unsigned long len, atomic_t *mmap_changing,
-				   uffd_flags_t flags);
-extern int mwriteprotect_range(struct mm_struct *dst_mm,
-			       unsigned long start, unsigned long len,
-			       bool enable_wp, atomic_t *mmap_changing);
+				     unsigned long len);
+extern ssize_t mfill_atomic_continue(struct userfaultfd_ctx *ctx, unsigned long dst_start,
+				     unsigned long len, uffd_flags_t flags);
+extern ssize_t mfill_atomic_poison(struct userfaultfd_ctx *ctx, unsigned long start,
+				   unsigned long len, uffd_flags_t flags);
+extern int mwriteprotect_range(struct userfaultfd_ctx *ctx, unsigned long start,
+			       unsigned long len, bool enable_wp);
 extern long uffd_wp_range(struct vm_area_struct *vma,
 			  unsigned long start, unsigned long len, bool enable_wp);
 
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index e3a91871462a..6e2ca04ab04d 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -353,11 +353,11 @@ static pmd_t *mm_alloc_pmd(struct mm_struct *mm, unsigned long address)
  * called with mmap_lock held, it will release mmap_lock before returning.
  */
 static __always_inline ssize_t mfill_atomic_hugetlb(
+					      struct userfaultfd_ctx *ctx,
 					      struct vm_area_struct *dst_vma,
 					      unsigned long dst_start,
 					      unsigned long src_start,
 					      unsigned long len,
-					      atomic_t *mmap_changing,
 					      uffd_flags_t flags)
 {
 	struct mm_struct *dst_mm = dst_vma->vm_mm;
@@ -379,6 +379,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 	 * feature is not supported.
 	 */
 	if (uffd_flags_mode_is(flags, MFILL_ATOMIC_ZEROPAGE)) {
+		up_read(&ctx->map_changing_lock);
 		mmap_read_unlock(dst_mm);
 		return -EINVAL;
 	}
@@ -463,6 +464,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 		cond_resched();
 
 		if (unlikely(err == -ENOENT)) {
+			up_read(&ctx->map_changing_lock);
 			mmap_read_unlock(dst_mm);
 			BUG_ON(!folio);
 
@@ -473,12 +475,13 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 				goto out;
 			}
 			mmap_read_lock(dst_mm);
+			down_read(&ctx->map_changing_lock);
 			/*
 			 * If memory mappings are changing because of non-cooperative
 			 * operation (e.g. mremap) running in parallel, bail out and
 			 * request the user to retry later
 			 */
-			if (mmap_changing && atomic_read(mmap_changing)) {
+			if (atomic_read(&ctx->mmap_changing)) {
 				err = -EAGAIN;
 				break;
 			}
@@ -501,6 +504,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 	}
 
 out_unlock:
+	up_read(&ctx->map_changing_lock);
 	mmap_read_unlock(dst_mm);
 out:
 	if (folio)
@@ -512,11 +516,11 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 }
 #else /* !CONFIG_HUGETLB_PAGE */
 /* fail at build time if gcc attempts to use this */
-extern ssize_t mfill_atomic_hugetlb(struct vm_area_struct *dst_vma,
+extern ssize_t mfill_atomic_hugetlb(struct userfaultfd_ctx *ctx,
+				    struct vm_area_struct *dst_vma,
 				    unsigned long dst_start,
 				    unsigned long src_start,
 				    unsigned long len,
-				    atomic_t *mmap_changing,
 				    uffd_flags_t flags);
 #endif /* CONFIG_HUGETLB_PAGE */
 
@@ -564,13 +568,13 @@ static __always_inline ssize_t mfill_atomic_pte(pmd_t *dst_pmd,
 	return err;
 }
 
-static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
+static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 					    unsigned long dst_start,
 					    unsigned long src_start,
 					    unsigned long len,
-					    atomic_t *mmap_changing,
 					    uffd_flags_t flags)
 {
+	struct mm_struct *dst_mm = ctx->mm;
 	struct vm_area_struct *dst_vma;
 	ssize_t err;
 	pmd_t *dst_pmd;
@@ -600,8 +604,9 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 	 * operation (e.g. mremap) running in parallel, bail out and
 	 * request the user to retry later
 	 */
+	down_read(&ctx->map_changing_lock);
 	err = -EAGAIN;
-	if (mmap_changing && atomic_read(mmap_changing))
+	if (atomic_read(&ctx->mmap_changing))
 		goto out_unlock;
 
 	/*
@@ -633,8 +638,8 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 	 * If this is a HUGETLB vma, pass off to appropriate routine
 	 */
 	if (is_vm_hugetlb_page(dst_vma))
-		return mfill_atomic_hugetlb(dst_vma, dst_start, src_start,
-					    len, mmap_changing, flags);
+		return mfill_atomic_hugetlb(ctx, dst_vma, dst_start,
+					    src_start, len, flags);
 
 	if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma))
 		goto out_unlock;
@@ -693,6 +698,7 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 		if (unlikely(err == -ENOENT)) {
 			void *kaddr;
 
+			up_read(&ctx->map_changing_lock);
 			mmap_read_unlock(dst_mm);
 			BUG_ON(!folio);
 
@@ -723,6 +729,7 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 	}
 
 out_unlock:
+	up_read(&ctx->map_changing_lock);
 	mmap_read_unlock(dst_mm);
 out:
 	if (folio)
@@ -733,34 +740,33 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 	return copied ? copied : err;
 }
 
-ssize_t mfill_atomic_copy(struct mm_struct *dst_mm, unsigned long dst_start,
+ssize_t mfill_atomic_copy(struct userfaultfd_ctx *ctx, unsigned long dst_start,
 			  unsigned long src_start, unsigned long len,
-			  atomic_t *mmap_changing, uffd_flags_t flags)
+			  uffd_flags_t flags)
 {
-	return mfill_atomic(dst_mm, dst_start, src_start, len, mmap_changing,
+	return mfill_atomic(ctx, dst_start, src_start, len,
 			    uffd_flags_set_mode(flags, MFILL_ATOMIC_COPY));
 }
 
-ssize_t mfill_atomic_zeropage(struct mm_struct *dst_mm, unsigned long start,
-			      unsigned long len, atomic_t *mmap_changing)
+ssize_t mfill_atomic_zeropage(struct userfaultfd_ctx *ctx,
+			      unsigned long start,
+			      unsigned long len)
 {
-	return mfill_atomic(dst_mm, start, 0, len, mmap_changing,
+	return mfill_atomic(ctx, start, 0, len,
 			    uffd_flags_set_mode(0, MFILL_ATOMIC_ZEROPAGE));
 }
 
-ssize_t mfill_atomic_continue(struct mm_struct *dst_mm, unsigned long start,
-			      unsigned long len, atomic_t *mmap_changing,
-			      uffd_flags_t flags)
+ssize_t mfill_atomic_continue(struct userfaultfd_ctx *ctx, unsigned long start,
			      unsigned long len, uffd_flags_t flags)
 {
-	return mfill_atomic(dst_mm, start, 0, len, mmap_changing,
+	return mfill_atomic(ctx, start, 0, len,
 			    uffd_flags_set_mode(flags, MFILL_ATOMIC_CONTINUE));
 }
 
-ssize_t mfill_atomic_poison(struct mm_struct *dst_mm, unsigned long start,
-			    unsigned long len, atomic_t *mmap_changing,
-			    uffd_flags_t flags)
+ssize_t mfill_atomic_poison(struct userfaultfd_ctx *ctx, unsigned long start,
+			    unsigned long len, uffd_flags_t flags)
 {
-	return mfill_atomic(dst_mm, start, 0, len, mmap_changing,
+	return mfill_atomic(ctx, start, 0, len,
 			    uffd_flags_set_mode(flags, MFILL_ATOMIC_POISON));
 }
 
@@ -793,10 +799,10 @@ long uffd_wp_range(struct vm_area_struct *dst_vma,
 	return ret;
 }
 
-int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
-			unsigned long len, bool enable_wp,
-			atomic_t *mmap_changing)
+int mwriteprotect_range(struct userfaultfd_ctx *ctx, unsigned long start,
+			unsigned long len, bool enable_wp)
 {
+	struct mm_struct *dst_mm = ctx->mm;
 	unsigned long end = start + len;
 	unsigned long _start, _end;
 	struct vm_area_struct *dst_vma;
@@ -820,8 +826,9 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
 	 * operation (e.g. mremap) running in parallel, bail out and
 	 * request the user to retry later
 	 */
+	down_read(&ctx->map_changing_lock);
 	err = -EAGAIN;
-	if (mmap_changing && atomic_read(mmap_changing))
+	if (atomic_read(&ctx->mmap_changing))
 		goto out_unlock;
 
 	err = -ENOENT;
@@ -850,6 +857,7 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
 		err = 0;
 	}
 out_unlock:
+	up_read(&ctx->map_changing_lock);
 	mmap_read_unlock(dst_mm);
 	return err;
 }