From patchwork Mon Jan 29 19:35:10 2024
X-Patchwork-Submitter: Lokesh Gidra <lokeshgidra@google.com>
X-Patchwork-Id: 193679
Date: Mon, 29 Jan 2024 11:35:10 -0800
From: Lokesh Gidra <lokeshgidra@google.com>
To: akpm@linux-foundation.org
Cc: lokeshgidra@google.com, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, selinux@vger.kernel.org, surenb@google.com,
 kernel-team@android.com, aarcange@redhat.com, peterx@redhat.com,
 david@redhat.com, axelrasmussen@google.com, bgeffon@google.com,
 willy@infradead.org, jannh@google.com, kaleshsingh@google.com,
 ngeoffray@google.com, timmurray@google.com, rppt@kernel.org
Message-ID: <20240129193512.123145-2-lokeshgidra@google.com>
In-Reply-To: <20240129193512.123145-1-lokeshgidra@google.com>
Subject: [PATCH v2 1/3] userfaultfd: move userfaultfd_ctx struct to header file

Move the userfaultfd_ctx struct to userfaultfd_k.h so that it is
accessible from mm/userfaultfd.c. There are no other changes to the
struct.

This is required to prepare for using per-vma locks in userfaultfd
operations.

Signed-off-by: Lokesh Gidra <lokeshgidra@google.com>
Reviewed-by: Mike Rapoport (IBM) <rppt@kernel.org>
---
 fs/userfaultfd.c              | 39 -----------------------------------
 include/linux/userfaultfd_k.h | 39 +++++++++++++++++++++++++++++++++++
 2 files changed, 39 insertions(+), 39 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 05c8e8a05427..58331b83d648 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -50,45 +50,6 @@ static struct ctl_table vm_userfaultfd_table[] = {
 
 static struct kmem_cache *userfaultfd_ctx_cachep __ro_after_init;
 
-/*
- * Start with fault_pending_wqh and fault_wqh so they're more likely
- * to be in the same cacheline.
- *
- * Locking order:
- *	fd_wqh.lock
- *		fault_pending_wqh.lock
- *			fault_wqh.lock
- *			event_wqh.lock
- *
- * To avoid deadlocks, IRQs must be disabled when taking any of the above locks,
- * since fd_wqh.lock is taken by aio_poll() while it's holding a lock that's
- * also taken in IRQ context.
- */
-struct userfaultfd_ctx {
-	/* waitqueue head for the pending (i.e. not read) userfaults */
-	wait_queue_head_t fault_pending_wqh;
-	/* waitqueue head for the userfaults */
-	wait_queue_head_t fault_wqh;
-	/* waitqueue head for the pseudo fd to wakeup poll/read */
-	wait_queue_head_t fd_wqh;
-	/* waitqueue head for events */
-	wait_queue_head_t event_wqh;
-	/* a refile sequence protected by fault_pending_wqh lock */
-	seqcount_spinlock_t refile_seq;
-	/* pseudo fd refcounting */
-	refcount_t refcount;
-	/* userfaultfd syscall flags */
-	unsigned int flags;
-	/* features requested from the userspace */
-	unsigned int features;
-	/* released */
-	bool released;
-	/* memory mappings are changing because of non-cooperative event */
-	atomic_t mmap_changing;
-	/* mm with one or more vmas attached to this userfaultfd_ctx */
-	struct mm_struct *mm;
-};
-
 struct userfaultfd_fork_ctx {
 	struct userfaultfd_ctx *orig;
 	struct userfaultfd_ctx *new;
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index e4056547fbe6..691d928ee864 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -36,6 +36,45 @@
 #define UFFD_SHARED_FCNTL_FLAGS (O_CLOEXEC | O_NONBLOCK)
 #define UFFD_FLAGS_SET (EFD_SHARED_FCNTL_FLAGS)
 
+/*
+ * Start with fault_pending_wqh and fault_wqh so they're more likely
+ * to be in the same cacheline.
+ *
+ * Locking order:
+ *	fd_wqh.lock
+ *		fault_pending_wqh.lock
+ *			fault_wqh.lock
+ *			event_wqh.lock
+ *
+ * To avoid deadlocks, IRQs must be disabled when taking any of the above locks,
+ * since fd_wqh.lock is taken by aio_poll() while it's holding a lock that's
+ * also taken in IRQ context.
+ */
+struct userfaultfd_ctx {
+	/* waitqueue head for the pending (i.e. not read) userfaults */
+	wait_queue_head_t fault_pending_wqh;
+	/* waitqueue head for the userfaults */
+	wait_queue_head_t fault_wqh;
+	/* waitqueue head for the pseudo fd to wakeup poll/read */
+	wait_queue_head_t fd_wqh;
+	/* waitqueue head for events */
+	wait_queue_head_t event_wqh;
+	/* a refile sequence protected by fault_pending_wqh lock */
+	seqcount_spinlock_t refile_seq;
+	/* pseudo fd refcounting */
+	refcount_t refcount;
+	/* userfaultfd syscall flags */
+	unsigned int flags;
+	/* features requested from the userspace */
+	unsigned int features;
+	/* released */
+	bool released;
+	/* memory mappings are changing because of non-cooperative event */
+	atomic_t mmap_changing;
+	/* mm with one or more vmas attached to this userfaultfd_ctx */
+	struct mm_struct *mm;
+};
+
 extern vm_fault_t handle_userfault(struct vm_fault *vmf,
				    unsigned long reason);

/* A combined operation mode + behavior flags. */
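
The locking-order comment that moves with the struct is easiest to read
next to a concrete sketch. The function below is purely illustrative
(not part of this patch): the waitqueue spinlocks nest in the documented
order, and the outermost fd_wqh.lock is taken with IRQs disabled because
aio_poll() can take it while holding a lock that is also taken in IRQ
context.

#include <linux/spinlock.h>
#include <linux/userfaultfd_k.h>

/* Illustrative sketch only -- not part of this patch. */
static void uffd_lock_order_sketch(struct userfaultfd_ctx *ctx)
{
	unsigned long flags;

	/* outermost lock; IRQs must stay off while it is held */
	spin_lock_irqsave(&ctx->fd_wqh.lock, flags);
	/* fault_pending_wqh.lock nests inside fd_wqh.lock */
	spin_lock(&ctx->fault_pending_wqh.lock);

	/* ... e.g. refile a userfault from fault_pending_wqh to fault_wqh ... */

	spin_unlock(&ctx->fault_pending_wqh.lock);
	spin_unlock_irqrestore(&ctx->fd_wqh.lock, flags);
}
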
From patchwork Mon Jan 29 19:35:11 2024
X-Patchwork-Submitter: Lokesh Gidra <lokeshgidra@google.com>
X-Patchwork-Id: 193677
Date: Mon, 29 Jan 2024 11:35:11 -0800
From: Lokesh Gidra <lokeshgidra@google.com>
To: akpm@linux-foundation.org
Cc: lokeshgidra@google.com, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, selinux@vger.kernel.org, surenb@google.com,
 kernel-team@android.com, aarcange@redhat.com, peterx@redhat.com,
 david@redhat.com, axelrasmussen@google.com, bgeffon@google.com,
 willy@infradead.org, jannh@google.com, kaleshsingh@google.com,
 ngeoffray@google.com, timmurray@google.com, rppt@kernel.org
Message-ID: <20240129193512.123145-3-lokeshgidra@google.com>
In-Reply-To: <20240129193512.123145-1-lokeshgidra@google.com>
Subject: [PATCH v2 2/3] userfaultfd: protect mmap_changing with rw_sem in userfaultfd_ctx

Increments and loads of mmap_changing are currently always done within
an mmap_lock critical section. This ensures that if userspace requests
event notification for non-cooperative operations (e.g. mremap),
userfaultfd operations don't occur concurrently with them.

The same guarantee can be achieved with a separate read-write semaphore
in userfaultfd_ctx: increments are done in write mode and loads in read
mode, eliminating the dependency on mmap_lock for this purpose.

This is a preparatory step before we replace mmap_lock usage with
per-vma locks in the fill/move ioctls.
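
Before the diff, a minimal sketch of the two sides of the new semaphore,
distilled from the changes below; perform_uffd_op() is a hypothetical
placeholder for the fill/move/wp work:

#include <linux/rwsem.h>
#include <linux/userfaultfd_k.h>

/* Non-cooperative event side (fork, mremap, unmap notification): */
static void notify_event_sketch(struct userfaultfd_ctx *ctx)
{
	down_write(&ctx->map_changing_lock);
	atomic_inc(&ctx->mmap_changing);	/* no operation can be mid-flight here */
	up_write(&ctx->map_changing_lock);
}

/* userfaultfd operation side (copy/zeropage/continue/poison/wp): */
static int uffd_op_sketch(struct userfaultfd_ctx *ctx)
{
	int err = -EAGAIN;

	down_read(&ctx->map_changing_lock);
	if (!atomic_read(&ctx->mmap_changing))
		err = perform_uffd_op(ctx);	/* hypothetical placeholder */
	up_read(&ctx->map_changing_lock);
	return err;	/* -EAGAIN asks userspace to retry */
}
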
Signed-off-by: Lokesh Gidra <lokeshgidra@google.com>
Reviewed-by: Mike Rapoport (IBM) <rppt@kernel.org>
---
 fs/userfaultfd.c              | 40 ++++++++++++----------
 include/linux/userfaultfd_k.h | 31 ++++++++++--------
 mm/userfaultfd.c              | 62 ++++++++++++++++++++---------------
 3 files changed, 75 insertions(+), 58 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 58331b83d648..c00a021bcce4 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -685,12 +685,15 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs)
 	ctx->flags = octx->flags;
 	ctx->features = octx->features;
 	ctx->released = false;
+	init_rwsem(&ctx->map_changing_lock);
 	atomic_set(&ctx->mmap_changing, 0);
 	ctx->mm = vma->vm_mm;
 	mmgrab(ctx->mm);
 
 	userfaultfd_ctx_get(octx);
+	down_write(&octx->map_changing_lock);
 	atomic_inc(&octx->mmap_changing);
+	up_write(&octx->map_changing_lock);
 	fctx->orig = octx;
 	fctx->new = ctx;
 	list_add_tail(&fctx->list, fcs);
@@ -737,7 +740,9 @@ void mremap_userfaultfd_prep(struct vm_area_struct *vma,
 	if (ctx->features & UFFD_FEATURE_EVENT_REMAP) {
 		vm_ctx->ctx = ctx;
 		userfaultfd_ctx_get(ctx);
+		down_write(&ctx->map_changing_lock);
 		atomic_inc(&ctx->mmap_changing);
+		up_write(&ctx->map_changing_lock);
 	} else {
 		/* Drop uffd context if remap feature not enabled */
 		vma_start_write(vma);
@@ -783,7 +788,9 @@ bool userfaultfd_remove(struct vm_area_struct *vma,
 		return true;
 
 	userfaultfd_ctx_get(ctx);
+	down_write(&ctx->map_changing_lock);
 	atomic_inc(&ctx->mmap_changing);
+	up_write(&ctx->map_changing_lock);
 	mmap_read_unlock(mm);
 
 	msg_init(&ewq.msg);
@@ -825,7 +832,9 @@ int userfaultfd_unmap_prep(struct vm_area_struct *vma, unsigned long start,
 		return -ENOMEM;
 
 	userfaultfd_ctx_get(ctx);
+	down_write(&ctx->map_changing_lock);
 	atomic_inc(&ctx->mmap_changing);
+	up_write(&ctx->map_changing_lock);
 	unmap_ctx->ctx = ctx;
 	unmap_ctx->start = start;
 	unmap_ctx->end = end;
@@ -1709,9 +1718,8 @@ static int userfaultfd_copy(struct userfaultfd_ctx *ctx,
 	if (uffdio_copy.mode & UFFDIO_COPY_MODE_WP)
 		flags |= MFILL_ATOMIC_WP;
 	if (mmget_not_zero(ctx->mm)) {
-		ret = mfill_atomic_copy(ctx->mm, uffdio_copy.dst, uffdio_copy.src,
-					uffdio_copy.len, &ctx->mmap_changing,
-					flags);
+		ret = mfill_atomic_copy(ctx, uffdio_copy.dst, uffdio_copy.src,
+					uffdio_copy.len, flags);
 		mmput(ctx->mm);
 	} else {
 		return -ESRCH;
@@ -1761,9 +1769,8 @@ static int userfaultfd_zeropage(struct userfaultfd_ctx *ctx,
 		goto out;
 
 	if (mmget_not_zero(ctx->mm)) {
-		ret = mfill_atomic_zeropage(ctx->mm, uffdio_zeropage.range.start,
-					    uffdio_zeropage.range.len,
-					    &ctx->mmap_changing);
+		ret = mfill_atomic_zeropage(ctx, uffdio_zeropage.range.start,
+					    uffdio_zeropage.range.len);
 		mmput(ctx->mm);
 	} else {
 		return -ESRCH;
@@ -1818,9 +1825,8 @@ static int userfaultfd_writeprotect(struct userfaultfd_ctx *ctx,
 		return -EINVAL;
 
 	if (mmget_not_zero(ctx->mm)) {
-		ret = mwriteprotect_range(ctx->mm, uffdio_wp.range.start,
-					  uffdio_wp.range.len, mode_wp,
-					  &ctx->mmap_changing);
+		ret = mwriteprotect_range(ctx, uffdio_wp.range.start,
+					  uffdio_wp.range.len, mode_wp);
 		mmput(ctx->mm);
 	} else {
 		return -ESRCH;
@@ -1870,9 +1876,8 @@ static int userfaultfd_continue(struct userfaultfd_ctx *ctx, unsigned long arg)
 		flags |= MFILL_ATOMIC_WP;
 
 	if (mmget_not_zero(ctx->mm)) {
-		ret = mfill_atomic_continue(ctx->mm, uffdio_continue.range.start,
-					    uffdio_continue.range.len,
-					    &ctx->mmap_changing, flags);
+		ret = mfill_atomic_continue(ctx, uffdio_continue.range.start,
+					    uffdio_continue.range.len, flags);
 		mmput(ctx->mm);
 	} else {
 		return -ESRCH;
@@ -1925,9 +1930,8 @@ static inline int userfaultfd_poison(struct userfaultfd_ctx *ctx, unsigned long
 		goto out;
 
 	if (mmget_not_zero(ctx->mm)) {
-		ret = mfill_atomic_poison(ctx->mm, uffdio_poison.range.start,
-					  uffdio_poison.range.len,
-					  &ctx->mmap_changing, 0);
+		ret = mfill_atomic_poison(ctx, uffdio_poison.range.start,
+					  uffdio_poison.range.len, 0);
 		mmput(ctx->mm);
 	} else {
 		return -ESRCH;
@@ -2003,13 +2007,14 @@ static int userfaultfd_move(struct userfaultfd_ctx *ctx,
 	if (mmget_not_zero(mm)) {
 		mmap_read_lock(mm);
 
-		/* Re-check after taking mmap_lock */
+		/* Re-check after taking map_changing_lock */
+		down_read(&ctx->map_changing_lock);
 		if (likely(!atomic_read(&ctx->mmap_changing)))
 			ret = move_pages(ctx, mm, uffdio_move.dst, uffdio_move.src,
 					 uffdio_move.len, uffdio_move.mode);
 		else
 			ret = -EAGAIN;
-
+		up_read(&ctx->map_changing_lock);
 		mmap_read_unlock(mm);
 		mmput(mm);
 	} else {
@@ -2216,6 +2221,7 @@ static int new_userfaultfd(int flags)
 	ctx->flags = flags;
 	ctx->features = 0;
 	ctx->released = false;
+	init_rwsem(&ctx->map_changing_lock);
 	atomic_set(&ctx->mmap_changing, 0);
 	ctx->mm = current->mm;
 	/* prevent the mm struct to be freed */
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index 691d928ee864..3210c3552976 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -69,6 +69,13 @@ struct userfaultfd_ctx {
 	unsigned int features;
 	/* released */
 	bool released;
+	/*
+	 * Prevents userfaultfd operations (fill/move/wp) from happening while
+	 * some non-cooperative event(s) is taking place. Increments are done
+	 * in write mode, whereas userfaultfd operations, which include
+	 * reading mmap_changing, are done in read mode.
+	 */
+	struct rw_semaphore map_changing_lock;
 	/* memory mappings are changing because of non-cooperative event */
 	atomic_t mmap_changing;
 	/* mm with one or more vmas attached to this userfaultfd_ctx */
@@ -113,22 +120,18 @@ extern int mfill_atomic_install_pte(pmd_t *dst_pmd,
 				    unsigned long dst_addr, struct page *page,
 				    bool newly_allocated, uffd_flags_t flags);
 
-extern ssize_t mfill_atomic_copy(struct mm_struct *dst_mm, unsigned long dst_start,
+extern ssize_t mfill_atomic_copy(struct userfaultfd_ctx *ctx, unsigned long dst_start,
 				 unsigned long src_start, unsigned long len,
-				 atomic_t *mmap_changing, uffd_flags_t flags);
-extern ssize_t mfill_atomic_zeropage(struct mm_struct *dst_mm,
+				 uffd_flags_t flags);
+extern ssize_t mfill_atomic_zeropage(struct userfaultfd_ctx *ctx,
 				     unsigned long dst_start,
-				     unsigned long len,
-				     atomic_t *mmap_changing);
-extern ssize_t mfill_atomic_continue(struct mm_struct *dst_mm, unsigned long dst_start,
-				     unsigned long len, atomic_t *mmap_changing,
-				     uffd_flags_t flags);
-extern ssize_t mfill_atomic_poison(struct mm_struct *dst_mm, unsigned long start,
-				   unsigned long len, atomic_t *mmap_changing,
-				   uffd_flags_t flags);
-extern int mwriteprotect_range(struct mm_struct *dst_mm,
-			       unsigned long start, unsigned long len,
-			       bool enable_wp, atomic_t *mmap_changing);
+				     unsigned long len);
+extern ssize_t mfill_atomic_continue(struct userfaultfd_ctx *ctx, unsigned long dst_start,
+				     unsigned long len, uffd_flags_t flags);
+extern ssize_t mfill_atomic_poison(struct userfaultfd_ctx *ctx, unsigned long start,
+				   unsigned long len, uffd_flags_t flags);
+extern int mwriteprotect_range(struct userfaultfd_ctx *ctx, unsigned long start,
+			       unsigned long len, bool enable_wp);
 extern long uffd_wp_range(struct vm_area_struct *vma,
 			  unsigned long start, unsigned long len, bool enable_wp);
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index e3a91871462a..6e2ca04ab04d 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -353,11 +353,11 @@ static pmd_t *mm_alloc_pmd(struct mm_struct *mm, unsigned long address)
  * called with mmap_lock held, it will release mmap_lock before returning.
  */
 static __always_inline ssize_t mfill_atomic_hugetlb(
+					struct userfaultfd_ctx *ctx,
 					struct vm_area_struct *dst_vma,
 					unsigned long dst_start,
 					unsigned long src_start,
 					unsigned long len,
-					atomic_t *mmap_changing,
 					uffd_flags_t flags)
 {
 	struct mm_struct *dst_mm = dst_vma->vm_mm;
@@ -379,6 +379,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 	 * feature is not supported.
 	 */
 	if (uffd_flags_mode_is(flags, MFILL_ATOMIC_ZEROPAGE)) {
+		up_read(&ctx->map_changing_lock);
 		mmap_read_unlock(dst_mm);
 		return -EINVAL;
 	}
@@ -463,6 +464,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 		cond_resched();
 
 		if (unlikely(err == -ENOENT)) {
+			up_read(&ctx->map_changing_lock);
 			mmap_read_unlock(dst_mm);
 			BUG_ON(!folio);
 
@@ -473,12 +475,13 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 				goto out;
 			}
 			mmap_read_lock(dst_mm);
+			down_read(&ctx->map_changing_lock);
 			/*
 			 * If memory mappings are changing because of non-cooperative
 			 * operation (e.g. mremap) running in parallel, bail out and
 			 * request the user to retry later
 			 */
-			if (mmap_changing && atomic_read(mmap_changing)) {
+			if (atomic_read(&ctx->mmap_changing)) {
 				err = -EAGAIN;
 				break;
 			}
@@ -501,6 +504,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 	}
 
 out_unlock:
+	up_read(&ctx->map_changing_lock);
 	mmap_read_unlock(dst_mm);
 out:
 	if (folio)
@@ -512,11 +516,11 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 }
 #else /* !CONFIG_HUGETLB_PAGE */
 /* fail at build time if gcc attempts to use this */
-extern ssize_t mfill_atomic_hugetlb(struct vm_area_struct *dst_vma,
+extern ssize_t mfill_atomic_hugetlb(struct userfaultfd_ctx *ctx,
+				    struct vm_area_struct *dst_vma,
 				    unsigned long dst_start,
 				    unsigned long src_start,
 				    unsigned long len,
-				    atomic_t *mmap_changing,
 				    uffd_flags_t flags);
 #endif /* CONFIG_HUGETLB_PAGE */
 
@@ -564,13 +568,13 @@ static __always_inline ssize_t mfill_atomic_pte(pmd_t *dst_pmd,
 	return err;
 }
 
-static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
+static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 					    unsigned long dst_start,
 					    unsigned long src_start,
 					    unsigned long len,
-					    atomic_t *mmap_changing,
 					    uffd_flags_t flags)
 {
+	struct mm_struct *dst_mm = ctx->mm;
 	struct vm_area_struct *dst_vma;
 	ssize_t err;
 	pmd_t *dst_pmd;
@@ -600,8 +604,9 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 	 * operation (e.g. mremap) running in parallel, bail out and
 	 * request the user to retry later
 	 */
+	down_read(&ctx->map_changing_lock);
 	err = -EAGAIN;
-	if (mmap_changing && atomic_read(mmap_changing))
+	if (atomic_read(&ctx->mmap_changing))
 		goto out_unlock;
 
 	/*
@@ -633,8 +638,8 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 	 * If this is a HUGETLB vma, pass off to appropriate routine
 	 */
 	if (is_vm_hugetlb_page(dst_vma))
-		return mfill_atomic_hugetlb(dst_vma, dst_start, src_start,
-					    len, mmap_changing, flags);
+		return mfill_atomic_hugetlb(ctx, dst_vma, dst_start,
+					    src_start, len, flags);
 
 	if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma))
 		goto out_unlock;
@@ -693,6 +698,7 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 		if (unlikely(err == -ENOENT)) {
 			void *kaddr;
 
+			up_read(&ctx->map_changing_lock);
 			mmap_read_unlock(dst_mm);
 			BUG_ON(!folio);
 
@@ -723,6 +729,7 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 	}
 
 out_unlock:
+	up_read(&ctx->map_changing_lock);
 	mmap_read_unlock(dst_mm);
 out:
 	if (folio)
@@ -733,34 +740,33 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 	return copied ? copied : err;
 }
 
-ssize_t mfill_atomic_copy(struct mm_struct *dst_mm, unsigned long dst_start,
+ssize_t mfill_atomic_copy(struct userfaultfd_ctx *ctx, unsigned long dst_start,
 			  unsigned long src_start, unsigned long len,
-			  atomic_t *mmap_changing, uffd_flags_t flags)
+			  uffd_flags_t flags)
 {
-	return mfill_atomic(dst_mm, dst_start, src_start, len, mmap_changing,
+	return mfill_atomic(ctx, dst_start, src_start, len,
 			    uffd_flags_set_mode(flags, MFILL_ATOMIC_COPY));
 }
 
-ssize_t mfill_atomic_zeropage(struct mm_struct *dst_mm, unsigned long start,
-			      unsigned long len, atomic_t *mmap_changing)
+ssize_t mfill_atomic_zeropage(struct userfaultfd_ctx *ctx,
+			      unsigned long start,
+			      unsigned long len)
 {
-	return mfill_atomic(dst_mm, start, 0, len, mmap_changing,
+	return mfill_atomic(ctx, start, 0, len,
 			    uffd_flags_set_mode(0, MFILL_ATOMIC_ZEROPAGE));
 }
 
-ssize_t mfill_atomic_continue(struct mm_struct *dst_mm, unsigned long start,
-			      unsigned long len, atomic_t *mmap_changing,
-			      uffd_flags_t flags)
+ssize_t mfill_atomic_continue(struct userfaultfd_ctx *ctx, unsigned long start,
+			      unsigned long len, uffd_flags_t flags)
 {
-	return mfill_atomic(dst_mm, start, 0, len, mmap_changing,
+	return mfill_atomic(ctx, start, 0, len,
 			    uffd_flags_set_mode(flags, MFILL_ATOMIC_CONTINUE));
 }
 
-ssize_t mfill_atomic_poison(struct mm_struct *dst_mm, unsigned long start,
-			    unsigned long len, atomic_t *mmap_changing,
-			    uffd_flags_t flags)
+ssize_t mfill_atomic_poison(struct userfaultfd_ctx *ctx, unsigned long start,
+			    unsigned long len, uffd_flags_t flags)
 {
-	return mfill_atomic(dst_mm, start, 0, len, mmap_changing,
+	return mfill_atomic(ctx, start, 0, len,
 			    uffd_flags_set_mode(flags, MFILL_ATOMIC_POISON));
 }
 
@@ -793,10 +799,10 @@ long uffd_wp_range(struct vm_area_struct *dst_vma,
 	return ret;
 }
 
-int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
-			unsigned long len, bool enable_wp,
-			atomic_t *mmap_changing)
+int mwriteprotect_range(struct userfaultfd_ctx *ctx, unsigned long start,
+			unsigned long len, bool enable_wp)
 {
+	struct mm_struct *dst_mm = ctx->mm;
 	unsigned long end = start + len;
 	unsigned long _start, _end;
 	struct vm_area_struct *dst_vma;
@@ -820,8 +826,9 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
 	 * operation (e.g. mremap) running in parallel, bail out and
 	 * request the user to retry later
 	 */
+	down_read(&ctx->map_changing_lock);
 	err = -EAGAIN;
-	if (mmap_changing && atomic_read(mmap_changing))
+	if (atomic_read(&ctx->mmap_changing))
 		goto out_unlock;
 
 	err = -ENOENT;
@@ -850,6 +857,7 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
 		err = 0;
 	}
 out_unlock:
+	up_read(&ctx->map_changing_lock);
 	mmap_read_unlock(dst_mm);
 	return err;
 }
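
The userspace-visible contract is unchanged by this refactor: an
operation that races with a non-cooperative event still fails with
EAGAIN and should be retried once the event queue has been drained. A
hedged userspace sketch of that retry loop (assuming the uffd was opened
with O_NONBLOCK; real code would dispatch each event message rather than
discard it):

#include <errno.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/userfaultfd.h>

/* Drain pending uffd event messages (sketch: messages are discarded). */
static void drain_uffd_events(int uffd)
{
	struct uffd_msg msg;

	while (read(uffd, &msg, sizeof(msg)) == (ssize_t)sizeof(msg))
		;	/* real code handles fork/remap/remove events here */
}

/* Retry UFFDIO_COPY until it is no longer interrupted by a mapping change. */
static long copy_with_retry(int uffd, struct uffdio_copy *copy)
{
	long total = 0;

	while (ioctl(uffd, UFFDIO_COPY, copy) == -1) {
		if (errno != EAGAIN)
			return -errno;
		drain_uffd_events(uffd);
		/* advance past any prefix that was already copied */
		total += copy->copied;
		copy->dst += copy->copied;
		copy->src += copy->copied;
		copy->len -= copy->copied;
		copy->copied = 0;
	}
	return total + copy->copied;
}
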
From patchwork Mon Jan 29 19:35:12 2024
X-Patchwork-Submitter: Lokesh Gidra <lokeshgidra@google.com>
X-Patchwork-Id: 193680
Date: Mon, 29 Jan 2024 11:35:12 -0800
From: Lokesh Gidra <lokeshgidra@google.com>
To: akpm@linux-foundation.org
Cc: lokeshgidra@google.com, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, selinux@vger.kernel.org, surenb@google.com,
 kernel-team@android.com, aarcange@redhat.com, peterx@redhat.com,
 david@redhat.com, axelrasmussen@google.com, bgeffon@google.com,
 willy@infradead.org, jannh@google.com, kaleshsingh@google.com,
 ngeoffray@google.com, timmurray@google.com, rppt@kernel.org
Message-ID: <20240129193512.123145-4-lokeshgidra@google.com>
In-Reply-To: <20240129193512.123145-1-lokeshgidra@google.com>
Subject: [PATCH v2 3/3] userfaultfd: use per-vma locks in userfaultfd operations

All userfaultfd operations except write-protect now opportunistically
take the per-vma lock on the vmas they operate on. If that fails, they
fall back to taking mmap_lock in read mode. The write-protect operation
still requires mmap_lock, as it iterates over multiple vmas.
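
The pattern the commit message describes reduces to the following shape;
this is a sketch of the approach only, which the diff below packages as
find_and_pin_dst_vma()/unpin_vma():

#include <linux/mm.h>

/* Sketch: opportunistic per-VMA lock with mmap_lock fallback. */
static struct vm_area_struct *lock_dst_sketch(struct mm_struct *mm,
					      unsigned long addr,
					      bool *mmap_locked)
{
	struct vm_area_struct *vma;

	vma = lock_vma_under_rcu(mm, addr);	/* per-VMA read lock; may fail */
	if (!vma) {
		mmap_read_lock(mm);		/* fallback: mm-wide read lock */
		*mmap_locked = true;
		vma = find_vma(mm, addr);
	}
	/* caller releases via vma_end_read() or mmap_read_unlock() */
	return vma;
}
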
Signed-off-by: Lokesh Gidra <lokeshgidra@google.com>
---
 fs/userfaultfd.c              |  13 +--
 include/linux/userfaultfd_k.h |   5 +-
 mm/userfaultfd.c              | 175 +++++++++++++++++++++++-----------
 3 files changed, 122 insertions(+), 71 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index c00a021bcce4..60dcfafdc11a 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -2005,17 +2005,8 @@ static int userfaultfd_move(struct userfaultfd_ctx *ctx,
 		return -EINVAL;
 
 	if (mmget_not_zero(mm)) {
-		mmap_read_lock(mm);
-
-		/* Re-check after taking map_changing_lock */
-		down_read(&ctx->map_changing_lock);
-		if (likely(!atomic_read(&ctx->mmap_changing)))
-			ret = move_pages(ctx, mm, uffdio_move.dst, uffdio_move.src,
-					 uffdio_move.len, uffdio_move.mode);
-		else
-			ret = -EAGAIN;
-		up_read(&ctx->map_changing_lock);
-		mmap_read_unlock(mm);
+		ret = move_pages(ctx, uffdio_move.dst, uffdio_move.src,
+				 uffdio_move.len, uffdio_move.mode);
 		mmput(mm);
 	} else {
 		return -ESRCH;
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index 3210c3552976..05d59f74fc88 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -138,9 +138,8 @@ extern long uffd_wp_range(struct vm_area_struct *vma,
 /* move_pages */
 void double_pt_lock(spinlock_t *ptl1, spinlock_t *ptl2);
 void double_pt_unlock(spinlock_t *ptl1, spinlock_t *ptl2);
-ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm,
-		   unsigned long dst_start, unsigned long src_start,
-		   unsigned long len, __u64 flags);
+ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start,
+		   unsigned long src_start, unsigned long len, __u64 flags);
 int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
 			pmd_t dst_pmdval, struct vm_area_struct *dst_vma,
 			struct vm_area_struct *src_vma,
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 6e2ca04ab04d..d55bf18b80db 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -19,20 +19,39 @@
 #include <asm/tlbflush.h>
 #include "internal.h"
 
-static __always_inline
-struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm,
-				    unsigned long dst_start,
-				    unsigned long len)
+void unpin_vma(struct mm_struct *mm, struct vm_area_struct *vma, bool *mmap_locked)
+{
+	BUG_ON(!vma && !*mmap_locked);
+
+	if (*mmap_locked) {
+		mmap_read_unlock(mm);
+		*mmap_locked = false;
+	} else
+		vma_end_read(vma);
+}
+
+/*
+ * Search for VMA and make sure it is stable either by locking it or taking
+ * mmap_lock.
+ */
+struct vm_area_struct *find_and_pin_dst_vma(struct mm_struct *dst_mm,
+					    unsigned long dst_start,
+					    unsigned long len,
+					    bool *mmap_locked)
 {
+	struct vm_area_struct *dst_vma = lock_vma_under_rcu(dst_mm, dst_start);
+	if (!dst_vma) {
+		mmap_read_lock(dst_mm);
+		*mmap_locked = true;
+		dst_vma = find_vma(dst_mm, dst_start);
+	}
+
 	/*
 	 * Make sure that the dst range is both valid and fully within a
 	 * single existing vma.
 	 */
-	struct vm_area_struct *dst_vma;
-
-	dst_vma = find_vma(dst_mm, dst_start);
 	if (!range_in_vma(dst_vma, dst_start, dst_start + len))
-		return NULL;
+		goto unpin;
 
 	/*
 	 * Check the vma is registered in uffd, this is required to
@@ -40,9 +59,13 @@ struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm,
 	 * time.
 	 */
 	if (!dst_vma->vm_userfaultfd_ctx.ctx)
-		return NULL;
+		goto unpin;
 
 	return dst_vma;
+
+unpin:
+	unpin_vma(dst_mm, dst_vma, mmap_locked);
+	return NULL;
 }
 
 /* Check if dst_addr is outside of file's size. Must be called with ptl held. */
@@ -350,7 +373,8 @@ static pmd_t *mm_alloc_pmd(struct mm_struct *mm, unsigned long address)
 #ifdef CONFIG_HUGETLB_PAGE
 /*
  * mfill_atomic processing for HUGETLB vmas. Note that this routine is
- * called with mmap_lock held, it will release mmap_lock before returning.
+ * called with either vma-lock or mmap_lock held, it will release the lock
+ * before returning.
  */
 static __always_inline ssize_t mfill_atomic_hugetlb(
 					struct userfaultfd_ctx *ctx,
@@ -358,7 +382,8 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 					unsigned long dst_start,
 					unsigned long src_start,
 					unsigned long len,
-					uffd_flags_t flags)
+					uffd_flags_t flags,
+					bool *mmap_locked)
 {
 	struct mm_struct *dst_mm = dst_vma->vm_mm;
 	int vm_shared = dst_vma->vm_flags & VM_SHARED;
@@ -380,7 +405,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 	 */
 	if (uffd_flags_mode_is(flags, MFILL_ATOMIC_ZEROPAGE)) {
 		up_read(&ctx->map_changing_lock);
-		mmap_read_unlock(dst_mm);
+		unpin_vma(dst_mm, dst_vma, mmap_locked);
 		return -EINVAL;
 	}
 
@@ -404,12 +429,25 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 	 */
 	if (!dst_vma) {
 		err = -ENOENT;
-		dst_vma = find_dst_vma(dst_mm, dst_start, len);
-		if (!dst_vma || !is_vm_hugetlb_page(dst_vma))
-			goto out_unlock;
+		dst_vma = find_and_pin_dst_vma(dst_mm, dst_start,
+					       len, mmap_locked);
+		if (!dst_vma)
+			goto out;
+		if (!is_vm_hugetlb_page(dst_vma))
+			goto out_unlock_vma;
 
 		err = -EINVAL;
 		if (vma_hpagesize != vma_kernel_pagesize(dst_vma))
+			goto out_unlock_vma;
+
+		/*
+		 * If memory mappings are changing because of non-cooperative
+		 * operation (e.g. mremap) running in parallel, bail out and
+		 * request the user to retry later
+		 */
+		down_read(&ctx->map_changing_lock);
+		err = -EAGAIN;
+		if (atomic_read(&ctx->mmap_changing))
 			goto out_unlock;
 
 		vm_shared = dst_vma->vm_flags & VM_SHARED;
@@ -465,7 +503,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 
 		if (unlikely(err == -ENOENT)) {
 			up_read(&ctx->map_changing_lock);
-			mmap_read_unlock(dst_mm);
+			unpin_vma(dst_mm, dst_vma, mmap_locked);
 			BUG_ON(!folio);
 
 			err = copy_folio_from_user(folio,
@@ -474,17 +512,6 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 				err = -EFAULT;
 				goto out;
 			}
-			mmap_read_lock(dst_mm);
-			down_read(&ctx->map_changing_lock);
-			/*
-			 * If memory mappings are changing because of non-cooperative
-			 * operation (e.g. mremap) running in parallel, bail out and
-			 * request the user to retry later
-			 */
-			if (atomic_read(&ctx->mmap_changing)) {
-				err = -EAGAIN;
-				break;
-			}
 
 			dst_vma = NULL;
 			goto retry;
@@ -505,7 +532,8 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 
 out_unlock:
 	up_read(&ctx->map_changing_lock);
-	mmap_read_unlock(dst_mm);
+out_unlock_vma:
+	unpin_vma(dst_mm, dst_vma, mmap_locked);
 out:
 	if (folio)
 		folio_put(folio);
@@ -521,7 +549,8 @@ extern ssize_t mfill_atomic_hugetlb(struct userfaultfd_ctx *ctx,
 				    unsigned long dst_start,
 				    unsigned long src_start,
 				    unsigned long len,
-				    uffd_flags_t flags);
+				    uffd_flags_t flags,
+				    bool *mmap_locked);
 #endif /* CONFIG_HUGETLB_PAGE */
 
 static __always_inline ssize_t mfill_atomic_pte(pmd_t *dst_pmd,
@@ -581,6 +610,7 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 	unsigned long src_addr, dst_addr;
 	long copied;
 	struct folio *folio;
+	bool mmap_locked = false;
 
 	/*
 	 * Sanitize the command parameters:
@@ -597,7 +627,14 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 	copied = 0;
 	folio = NULL;
 retry:
-	mmap_read_lock(dst_mm);
+	/*
+	 * Make sure the vma is not shared, that the dst range is
+	 * both valid and fully within a single existing vma.
+	 */
+	err = -ENOENT;
+	dst_vma = find_and_pin_dst_vma(dst_mm, dst_start, len, &mmap_locked);
+	if (!dst_vma)
+		goto out;
 
 	/*
 	 * If memory mappings are changing because of non-cooperative
@@ -609,15 +646,6 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 	if (atomic_read(&ctx->mmap_changing))
 		goto out_unlock;
 
-	/*
-	 * Make sure the vma is not shared, that the dst range is
-	 * both valid and fully within a single existing vma.
-	 */
-	err = -ENOENT;
-	dst_vma = find_dst_vma(dst_mm, dst_start, len);
-	if (!dst_vma)
-		goto out_unlock;
-
 	err = -EINVAL;
 	/*
 	 * shmem_zero_setup is invoked in mmap for MAP_ANONYMOUS|MAP_SHARED but
@@ -638,8 +666,8 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 	 * If this is a HUGETLB vma, pass off to appropriate routine
 	 */
 	if (is_vm_hugetlb_page(dst_vma))
-		return mfill_atomic_hugetlb(ctx, dst_vma, dst_start,
-					    src_start, len, flags);
+		return mfill_atomic_hugetlb(ctx, dst_vma, dst_start, src_start,
+					    len, flags, &mmap_locked);
 
 	if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma))
 		goto out_unlock;
@@ -699,7 +727,8 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 			void *kaddr;
 
 			up_read(&ctx->map_changing_lock);
-			mmap_read_unlock(dst_mm);
+			unpin_vma(dst_mm, dst_vma, &mmap_locked);
+
 			BUG_ON(!folio);
 
 			kaddr = kmap_local_folio(folio, 0);
@@ -730,7 +759,7 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 
 out_unlock:
 	up_read(&ctx->map_changing_lock);
-	mmap_read_unlock(dst_mm);
+	unpin_vma(dst_mm, dst_vma, &mmap_locked);
 out:
 	if (folio)
 		folio_put(folio);
@@ -1285,8 +1314,6 @@ static int validate_move_areas(struct userfaultfd_ctx *ctx,
  * @len: length of the virtual memory range
  * @mode: flags from uffdio_move.mode
  *
- * Must be called with mmap_lock held for read.
- *
  * move_pages() remaps arbitrary anonymous pages atomically in zero
  * copy. It only works on non shared anonymous pages because those can
  * be relocated without generating non linear anon_vmas in the rmap
@@ -1353,15 +1380,16 @@ static int validate_move_areas(struct userfaultfd_ctx *ctx,
 * could be obtained. This is the only additional complexity added to
 * the rmap code to provide this anonymous page remapping functionality.
 */
-ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm,
-		   unsigned long dst_start, unsigned long src_start,
-		   unsigned long len, __u64 mode)
+ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start,
+		   unsigned long src_start, unsigned long len, __u64 mode)
 {
+	struct mm_struct *mm = ctx->mm;
 	struct vm_area_struct *src_vma, *dst_vma;
 	unsigned long src_addr, dst_addr;
 	pmd_t *src_pmd, *dst_pmd;
 	long err = -EINVAL;
 	ssize_t moved = 0;
+	bool mmap_locked = false;
 
 	/* Sanitize the command parameters. */
 	if (WARN_ON_ONCE(src_start & ~PAGE_MASK) ||
@@ -1374,28 +1402,52 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm,
 	    WARN_ON_ONCE(dst_start + len <= dst_start))
 		goto out;
 
+	dst_vma = NULL;
+	src_vma = lock_vma_under_rcu(mm, src_start);
+	if (src_vma) {
+		dst_vma = lock_vma_under_rcu(mm, dst_start);
+		if (!dst_vma)
+			vma_end_read(src_vma);
+	}
+
+	/* If we failed to lock both VMAs, fall back to mmap_lock */
+	if (!dst_vma) {
+		mmap_read_lock(mm);
+		mmap_locked = true;
+		src_vma = find_vma(mm, src_start);
+		if (!src_vma)
+			goto out_unlock_mmap;
+		dst_vma = find_vma(mm, dst_start);
+		if (!dst_vma)
+			goto out_unlock_mmap;
+	}
+
+	/* Re-check after taking map_changing_lock */
+	down_read(&ctx->map_changing_lock);
+	if (unlikely(atomic_read(&ctx->mmap_changing))) {
+		err = -EAGAIN;
+		goto out_unlock;
+	}
 	/*
 	 * Make sure the vma is not shared, that the src and dst remap
 	 * ranges are both valid and fully within a single existing
 	 * vma.
 	 */
-	src_vma = find_vma(mm, src_start);
-	if (!src_vma || (src_vma->vm_flags & VM_SHARED))
-		goto out;
+	if (src_vma->vm_flags & VM_SHARED)
+		goto out_unlock;
 	if (src_start < src_vma->vm_start ||
 	    src_start + len > src_vma->vm_end)
-		goto out;
+		goto out_unlock;
 
-	dst_vma = find_vma(mm, dst_start);
-	if (!dst_vma || (dst_vma->vm_flags & VM_SHARED))
-		goto out;
+	if (dst_vma->vm_flags & VM_SHARED)
+		goto out_unlock;
 	if (dst_start < dst_vma->vm_start ||
 	    dst_start + len > dst_vma->vm_end)
-		goto out;
+		goto out_unlock;
 
 	err = validate_move_areas(ctx, src_vma, dst_vma);
 	if (err)
-		goto out;
+		goto out_unlock;
 
 	for (src_addr = src_start, dst_addr = dst_start;
 	     src_addr < src_start + len;) {
@@ -1512,6 +1564,15 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm,
 		moved += step_size;
 	}
 
+out_unlock:
+	up_read(&ctx->map_changing_lock);
+out_unlock_mmap:
+	if (mmap_locked)
+		mmap_read_unlock(mm);
+	else {
+		vma_end_read(dst_vma);
+		vma_end_read(src_vma);
+	}
 out:
 	VM_WARN_ON(moved < 0);
 	VM_WARN_ON(err > 0);
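
Finally, a hypothetical caller (not in the patch) showing the pin/unpin
contract the new helpers establish: whichever lock find_and_pin_dst_vma()
ended up taking -- the single vma's lock or the whole mm's read lock --
one unpin_vma() call releases it, and a failed lookup leaves nothing
locked, so callers never need to care which path was taken.

#include <linux/errno.h>
#include <linux/mm.h>

/* Hypothetical caller, illustrating the contract only. */
static int with_pinned_dst_sketch(struct mm_struct *mm, unsigned long start,
				  unsigned long len)
{
	bool mmap_locked = false;
	struct vm_area_struct *vma;

	vma = find_and_pin_dst_vma(mm, start, len, &mmap_locked);
	if (!vma)
		return -ENOENT;	/* lookup failed: helper left nothing locked */

	/* ... operate on the uffd-registered vma ... */

	unpin_vma(mm, vma, &mmap_locked);	/* releases whichever lock is held */
	return 0;
}
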