From patchwork Tue Feb 13 21:57:39 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Lokesh Gidra <lokeshgidra@google.com>
X-Patchwork-Id: 200707
Date: Tue, 13 Feb 2024 13:57:39 -0800
In-Reply-To: <20240213215741.3816570-1-lokeshgidra@google.com>
References: <20240213215741.3816570-1-lokeshgidra@google.com>
Mime-Version: 1.0
X-Mailer: git-send-email 2.43.0.687.g38aa6559b0-goog
Message-ID: <20240213215741.3816570-2-lokeshgidra@google.com>
Subject: [PATCH v6 1/3] userfaultfd: move userfaultfd_ctx struct to header file
From: Lokesh Gidra <lokeshgidra@google.com>
To: akpm@linux-foundation.org
Cc: lokeshgidra@google.com, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, selinux@vger.kernel.org, surenb@google.com,
 kernel-team@android.com, aarcange@redhat.com, peterx@redhat.com,
 david@redhat.com, axelrasmussen@google.com, bgeffon@google.com,
 willy@infradead.org, jannh@google.com, kaleshsingh@google.com,
 ngeoffray@google.com, timmurray@google.com, rppt@kernel.org,
 Liam.Howlett@oracle.com

Move the userfaultfd_ctx struct to userfaultfd_k.h so that it is
accessible from mm/userfaultfd.c. There are no other changes to the
struct.

This is required to prepare for using per-vma locks in userfaultfd
operations.

Signed-off-by: Lokesh Gidra <lokeshgidra@google.com>
Reviewed-by: Mike Rapoport (IBM) <rppt@kernel.org>
---
 fs/userfaultfd.c              | 39 -----------------------------------
 include/linux/userfaultfd_k.h | 39 +++++++++++++++++++++++++++++++++++
 2 files changed, 39 insertions(+), 39 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 05c8e8a05427..58331b83d648 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -50,45 +50,6 @@ static struct ctl_table vm_userfaultfd_table[] = {
 
 static struct kmem_cache *userfaultfd_ctx_cachep __ro_after_init;
 
-/*
- * Start with fault_pending_wqh and fault_wqh so they're more likely
- * to be in the same cacheline.
- *
- * Locking order:
- *	fd_wqh.lock
- *		fault_pending_wqh.lock
- *			fault_wqh.lock
- *			event_wqh.lock
- *
- * To avoid deadlocks, IRQs must be disabled when taking any of the above locks,
- * since fd_wqh.lock is taken by aio_poll() while it's holding a lock that's
- * also taken in IRQ context.
- */
-struct userfaultfd_ctx {
-	/* waitqueue head for the pending (i.e. not read) userfaults */
-	wait_queue_head_t fault_pending_wqh;
-	/* waitqueue head for the userfaults */
-	wait_queue_head_t fault_wqh;
-	/* waitqueue head for the pseudo fd to wakeup poll/read */
-	wait_queue_head_t fd_wqh;
-	/* waitqueue head for events */
-	wait_queue_head_t event_wqh;
-	/* a refile sequence protected by fault_pending_wqh lock */
-	seqcount_spinlock_t refile_seq;
-	/* pseudo fd refcounting */
-	refcount_t refcount;
-	/* userfaultfd syscall flags */
-	unsigned int flags;
-	/* features requested from the userspace */
-	unsigned int features;
-	/* released */
-	bool released;
-	/* memory mappings are changing because of non-cooperative event */
-	atomic_t mmap_changing;
-	/* mm with one ore more vmas attached to this userfaultfd_ctx */
-	struct mm_struct *mm;
-};
-
 struct userfaultfd_fork_ctx {
 	struct userfaultfd_ctx *orig;
 	struct userfaultfd_ctx *new;
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index e4056547fbe6..691d928ee864 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -36,6 +36,45 @@
 #define UFFD_SHARED_FCNTL_FLAGS (O_CLOEXEC | O_NONBLOCK)
 #define UFFD_FLAGS_SET (EFD_SHARED_FCNTL_FLAGS)
 
+/*
+ * Start with fault_pending_wqh and fault_wqh so they're more likely
+ * to be in the same cacheline.
+ *
+ * Locking order:
+ *	fd_wqh.lock
+ *		fault_pending_wqh.lock
+ *			fault_wqh.lock
+ *			event_wqh.lock
+ *
+ * To avoid deadlocks, IRQs must be disabled when taking any of the above locks,
+ * since fd_wqh.lock is taken by aio_poll() while it's holding a lock that's
+ * also taken in IRQ context.
+ */
+struct userfaultfd_ctx {
+	/* waitqueue head for the pending (i.e. not read) userfaults */
+	wait_queue_head_t fault_pending_wqh;
+	/* waitqueue head for the userfaults */
+	wait_queue_head_t fault_wqh;
+	/* waitqueue head for the pseudo fd to wakeup poll/read */
+	wait_queue_head_t fd_wqh;
+	/* waitqueue head for events */
+	wait_queue_head_t event_wqh;
+	/* a refile sequence protected by fault_pending_wqh lock */
+	seqcount_spinlock_t refile_seq;
+	/* pseudo fd refcounting */
+	refcount_t refcount;
+	/* userfaultfd syscall flags */
+	unsigned int flags;
+	/* features requested from the userspace */
+	unsigned int features;
+	/* released */
+	bool released;
+	/* memory mappings are changing because of non-cooperative event */
+	atomic_t mmap_changing;
+	/* mm with one ore more vmas attached to this userfaultfd_ctx */
+	struct mm_struct *mm;
+};
+
 extern vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason);
 
 /* A combined operation mode + behavior flags. */
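
Editor's note (illustrative only, not part of the patch): the locking-order
comment being moved above documents a nesting discipline for the wait-queue
locks, with IRQs disabled for the whole critical section. The sketch below is
a minimal, hypothetical example of a caller honouring that order; the helper
name is invented for illustration and does not exist in fs/userfaultfd.c,
though the real wake paths there follow the same pattern.

/*
 * Hypothetical example: take fault_pending_wqh.lock before fault_wqh.lock,
 * per the documented order, and keep IRQs disabled via the outermost
 * spin_lock_irq() while both locks are held.
 */
static void uffd_ctx_wake_all_example(struct userfaultfd_ctx *ctx)
{
	/* outermost lock in this path, taken with IRQs disabled */
	spin_lock_irq(&ctx->fault_pending_wqh.lock);
	/* inner lock nests underneath; IRQs are already off */
	spin_lock(&ctx->fault_wqh.lock);

	/* ... wake or requeue waiters on both queues here ... */

	spin_unlock(&ctx->fault_wqh.lock);
	spin_unlock_irq(&ctx->fault_pending_wqh.lock);
}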