From patchwork Sat Apr 15 09:07:09 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 83653
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Matthew Wilcox, David Hildenbrand, x86@kernel.org, linux-sgx@vger.kernel.org,
    amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, kvm@vger.kernel.org,
    Thomas Gleixner, Ingo Molnar, Borislav Petkov, Jarkko Sakkinen, "H. Peter Anvin",
    Xinhui Pan, David Airlie, Daniel Vetter, Dimitri Sivanich, Arnd Bergmann,
    Greg Kroah-Hartman, Paolo Bonzini, Lorenzo Stoakes
Subject: [PATCH v2 1/7] mm/gup: remove unused vmas parameter from get_user_pages()
Date: Sat, 15 Apr 2023 10:07:09 +0100
Message-Id: <56b3f7360ac4ba3af3f75903a873f1e48df652e0.1681547405.git.lstoakes@gmail.com>

No invocation of get_user_pages() uses the vmas parameter, so remove it.

The GUP API is confusing and caveated. Recent changes have done much to
improve that; however, there is more we can do. Exporting vmas is a prime
target, as the caller has to be extremely careful not to use the returned
VMA pointers once the mmap_lock has been released, or be left with dangling
pointers.

Removing the vmas parameter focuses the GUP functions on their primary
purpose - pinning (and outputting) pages and performing the actions implied
by the input flags.

This is part of a patch series aiming to remove the vmas parameter
altogether.

Signed-off-by: Lorenzo Stoakes
Suggested-by: Matthew Wilcox (Oracle)
Acked-by: Greg Kroah-Hartman
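For illustration, a caller of the updated function looks roughly as follows.
This is a minimal sketch, not code from the patch; the helper name and error
handling are hypothetical, and the usual GUP locking rules are assumed:

#include <linux/mm.h>
#include <linux/sched.h>

/* Hypothetical helper: fault in and take a reference on one user page. */
static int example_get_one_page(unsigned long addr, struct page **page)
{
        long ret;

        mmap_read_lock(current->mm);
        /* The trailing vmas argument is gone; only the pages are returned. */
        ret = get_user_pages(addr, 1, FOLL_WRITE, page);
        mmap_read_unlock(current->mm);

        if (ret < 1)
                return -EFAULT;

        /* ... use *page; there is no VMA pointer left to dangle ... */
        put_page(*page);
        return 0;
}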
---
 arch/x86/kernel/cpu/sgx/ioctl.c     | 2 +-
 drivers/gpu/drm/radeon/radeon_ttm.c | 2 +-
 drivers/misc/sgi-gru/grufault.c     | 2 +-
 include/linux/mm.h                  | 3 +--
 mm/gup.c                            | 9 +++------
 mm/gup_test.c                       | 5 ++---
 virt/kvm/kvm_main.c                 | 2 +-
 7 files changed, 10 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 21ca0a831b70..5d390df21440 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -214,7 +214,7 @@ static int __sgx_encl_add_page(struct sgx_encl *encl,
 	if (!(vma->vm_flags & VM_MAYEXEC))
 		return -EACCES;
-	ret = get_user_pages(src, 1, 0, &src_page, NULL);
+	ret = get_user_pages(src, 1, 0, &src_page);
 	if (ret < 1)
 		return -EFAULT;
diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
index 1e8e287e113c..0597540f0dde 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -362,7 +362,7 @@ static int radeon_ttm_tt_pin_userptr(struct ttm_device *bdev, struct ttm_tt *ttm
 		struct page **pages = ttm->pages + pinned;
 		r = get_user_pages(userptr, num_pages, write ? FOLL_WRITE : 0,
-				   pages, NULL);
+				   pages);
 		if (r < 0)
 			goto release_pages;
diff --git a/drivers/misc/sgi-gru/grufault.c b/drivers/misc/sgi-gru/grufault.c
index b836936e9747..378cf02a2aa1 100644
--- a/drivers/misc/sgi-gru/grufault.c
+++ b/drivers/misc/sgi-gru/grufault.c
@@ -185,7 +185,7 @@ static int non_atomic_pte_lookup(struct vm_area_struct *vma,
 #else
 	*pageshift = PAGE_SHIFT;
 #endif
-	if (get_user_pages(vaddr, 1, write ? FOLL_WRITE : 0, &page, NULL) <= 0)
+	if (get_user_pages(vaddr, 1, write ? FOLL_WRITE : 0, &page) <= 0)
 		return -EFAULT;
 	*paddr = page_to_phys(page);
 	put_page(page);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 37554b08bb28..b14cc4972d0b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2380,8 +2380,7 @@ long pin_user_pages_remote(struct mm_struct *mm,
 			   unsigned int gup_flags, struct page **pages,
 			   struct vm_area_struct **vmas, int *locked);
 long get_user_pages(unsigned long start, unsigned long nr_pages,
-		    unsigned int gup_flags, struct page **pages,
-		    struct vm_area_struct **vmas);
+		    unsigned int gup_flags, struct page **pages);
 long pin_user_pages(unsigned long start, unsigned long nr_pages,
 		    unsigned int gup_flags, struct page **pages,
 		    struct vm_area_struct **vmas);
diff --git a/mm/gup.c b/mm/gup.c
index 1f72a717232b..7e454d6b157e 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2251,8 +2251,6 @@ long get_user_pages_remote(struct mm_struct *mm,
  * @pages:	array that receives pointers to the pages pinned.
  *		Should be at least nr_pages long. Or NULL, if caller
  *		only intends to ensure the pages are faulted in.
- * @vmas:	array of pointers to vmas corresponding to each page.
- *		Or NULL if the caller does not require them.
  *
  * This is the same as get_user_pages_remote(), just with a less-flexible
  * calling convention where we assume that the mm being operated on belongs to
@@ -2260,16 +2258,15 @@ long get_user_pages_remote(struct mm_struct *mm,
  * obviously don't pass FOLL_REMOTE in here.
  */
 long get_user_pages(unsigned long start, unsigned long nr_pages,
-		    unsigned int gup_flags, struct page **pages,
-		    struct vm_area_struct **vmas)
+		    unsigned int gup_flags, struct page **pages)
 {
 	int locked = 1;
-	if (!is_valid_gup_args(pages, vmas, NULL, &gup_flags, FOLL_TOUCH))
+	if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, FOLL_TOUCH))
 		return -EINVAL;
 	return __get_user_pages_locked(current->mm, start, nr_pages, pages,
-				       vmas, &locked, gup_flags);
+				       NULL, &locked, gup_flags);
 }
 EXPORT_SYMBOL(get_user_pages);
diff --git a/mm/gup_test.c b/mm/gup_test.c
index 8ae7307a1bb6..9ba8ea23f84e 100644
--- a/mm/gup_test.c
+++ b/mm/gup_test.c
@@ -139,8 +139,7 @@ static int __gup_test_ioctl(unsigned int cmd,
 					    pages + i);
 			break;
 		case GUP_BASIC_TEST:
-			nr = get_user_pages(addr, nr, gup->gup_flags, pages + i,
-					    NULL);
+			nr = get_user_pages(addr, nr, gup->gup_flags, pages + i);
 			break;
 		case PIN_FAST_BENCHMARK:
 			nr = pin_user_pages_fast(addr, nr, gup->gup_flags,
@@ -161,7 +160,7 @@ static int __gup_test_ioctl(unsigned int cmd,
 						    pages + i, NULL);
 			else
 				nr = get_user_pages(addr, nr, gup->gup_flags,
-						    pages + i, NULL);
+						    pages + i);
 			break;
 		default:
 			ret = -EINVAL;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d255964ec331..7f31e0a4adb5 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2474,7 +2474,7 @@ static inline int check_user_page_hwpoison(unsigned long addr)
 {
 	int rc, flags = FOLL_HWPOISON | FOLL_WRITE;
-	rc = get_user_pages(addr, 1, flags, NULL, NULL);
+	rc = get_user_pages(addr, 1, flags, NULL);
 	return rc == -EHWPOISON;
 }

From patchwork Sat Apr 15 09:08:30 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 83654
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton, iommu@lists.linux.dev
Cc: Matthew Wilcox, David Hildenbrand, kvm@vger.kernel.org, Jason Gunthorpe, Kevin Tian,
    Joerg Roedel, Will Deacon, Robin Murphy, Alex Williamson, Lorenzo Stoakes
Subject: [PATCH v2 2/7] mm/gup: remove unused vmas parameter from pin_user_pages_remote()
Date: Sat, 15 Apr 2023 10:08:30 +0100

No invocation of pin_user_pages_remote() uses the vmas parameter, so remove
it. This forms part of a larger patch set eliminating the use of the vmas
parameter altogether.

Signed-off-by: Lorenzo Stoakes
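As a hedged illustration of the new calling convention (not taken from the
patch; the helper name is invented), a caller pinning a single page of a
foreign mm now looks roughly like this:

#include <linux/mm.h>

/* Hypothetical sketch: pin one page of another process's mm. */
static long example_pin_remote_page(struct mm_struct *mm, unsigned long addr,
                                    struct page **page)
{
        int locked = 1;
        long ret;

        mmap_read_lock(mm);
        /* The vmas array is gone; &locked is now the trailing argument. */
        ret = pin_user_pages_remote(mm, addr, 1, FOLL_WRITE | FOLL_LONGTERM,
                                    page, &locked);
        if (locked)
                mmap_read_unlock(mm);

        /* On success the caller later releases the page via unpin_user_page(). */
        return ret;
}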
---
 drivers/iommu/iommufd/pages.c   | 4 ++--
 drivers/vfio/vfio_iommu_type1.c | 2 +-
 include/linux/mm.h              | 2 +-
 mm/gup.c                        | 8 +++-----
 mm/process_vm_access.c          | 2 +-
 5 files changed, 8 insertions(+), 10 deletions(-)

diff --git a/drivers/iommu/iommufd/pages.c b/drivers/iommu/iommufd/pages.c
index f8d92c9bb65b..9d55a2188a64 100644
--- a/drivers/iommu/iommufd/pages.c
+++ b/drivers/iommu/iommufd/pages.c
@@ -786,7 +786,7 @@ static int pfn_reader_user_pin(struct pfn_reader_user *user,
 		user->locked = 1;
 	}
 	rc = pin_user_pages_remote(pages->source_mm, uptr, npages,
-				   user->gup_flags, user->upages, NULL,
+				   user->gup_flags, user->upages,
 				   &user->locked);
 	}
 	if (rc <= 0) {
@@ -1787,7 +1787,7 @@ static int iopt_pages_rw_page(struct iopt_pages *pages, unsigned long index,
 	rc = pin_user_pages_remote(
 		pages->source_mm, (uintptr_t)(pages->uptr + index * PAGE_SIZE),
 		1, (flags & IOMMUFD_ACCESS_RW_WRITE) ? FOLL_WRITE : 0, &page,
-		NULL, NULL);
+		NULL);
 	mmap_read_unlock(pages->source_mm);
 	if (rc != 1) {
 		if (WARN_ON(rc >= 0))
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 493c31de0edb..e6dc8fec3ed5 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -562,7 +562,7 @@ static int vaddr_get_pfns(struct mm_struct *mm, unsigned long vaddr,
 	mmap_read_lock(mm);
 	ret = pin_user_pages_remote(mm, vaddr, npages, flags | FOLL_LONGTERM,
-				    pages, NULL, NULL);
+				    pages, NULL);
 	if (ret > 0) {
 		int i;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index b14cc4972d0b..ec9875c59f6d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2378,7 +2378,7 @@ long get_user_pages_remote(struct mm_struct *mm,
 long pin_user_pages_remote(struct mm_struct *mm,
 			   unsigned long start, unsigned long nr_pages,
 			   unsigned int gup_flags, struct page **pages,
-			   struct vm_area_struct **vmas, int *locked);
+			   int *locked);
 long get_user_pages(unsigned long start, unsigned long nr_pages,
 		    unsigned int gup_flags, struct page **pages);
 long pin_user_pages(unsigned long start, unsigned long nr_pages,
diff --git a/mm/gup.c b/mm/gup.c
index 7e454d6b157e..931c805bc32b 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -3093,8 +3093,6 @@ EXPORT_SYMBOL_GPL(pin_user_pages_fast);
 * @gup_flags:	flags modifying lookup behaviour
 * @pages:	array that receives pointers to the pages pinned.
 *		Should be at least nr_pages long.
- * @vmas:	array of pointers to vmas corresponding to each page.
- *		Or NULL if the caller does not require them.
 * @locked:	pointer to lock flag indicating whether lock is held and
 *		subsequently whether VM_FAULT_RETRY functionality can be
 *		utilised. Lock must initially be held.
@@ -3109,14 +3107,14 @@ EXPORT_SYMBOL_GPL(pin_user_pages_fast);
 long pin_user_pages_remote(struct mm_struct *mm,
 			   unsigned long start, unsigned long nr_pages,
 			   unsigned int gup_flags, struct page **pages,
-			   struct vm_area_struct **vmas, int *locked)
+			   int *locked)
 {
 	int local_locked = 1;
-	if (!is_valid_gup_args(pages, vmas, locked, &gup_flags,
+	if (!is_valid_gup_args(pages, NULL, locked, &gup_flags,
 			       FOLL_PIN | FOLL_TOUCH | FOLL_REMOTE))
 		return 0;
-	return __gup_longterm_locked(mm, start, nr_pages, pages, vmas,
+	return __gup_longterm_locked(mm, start, nr_pages, pages, NULL,
				     locked ? locked : &local_locked, gup_flags);
 }
diff --git a/mm/process_vm_access.c b/mm/process_vm_access.c
index 78dfaf9e8990..0523edab03a6 100644
--- a/mm/process_vm_access.c
+++ b/mm/process_vm_access.c
@@ -104,7 +104,7 @@ static int process_vm_rw_single_vec(unsigned long addr,
 		mmap_read_lock(mm);
 		pinned_pages = pin_user_pages_remote(mm, pa, pinned_pages,
 						     flags, process_pages,
-						     NULL, &locked);
+						     &locked);
 		if (locked)
 			mmap_read_unlock(mm);
 		if (pinned_pages <= 0)

From patchwork Sat Apr 15 09:08:34 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 83655
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Matthew Wilcox, David Hildenbrand, linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org,
    linux-s390@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-perf-users@vger.kernel.org,
    linux-security-module@vger.kernel.org, Catalin Marinas, Will Deacon, Christian Borntraeger,
    Janosch Frank, Claudio Imbrenda, Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
    Sven Schnelle, Eric Biederman, Kees Cook, Alexander Viro, Christian Brauner, Peter Zijlstra,
    Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin, Jiri Olsa,
    Namhyung Kim, Ian Rogers, Adrian Hunter, Kentaro Takeda, Tetsuo Handa, Paul Moore,
    James Morris, "Serge E. Hallyn", Paolo Bonzini, Lorenzo Stoakes
Subject: [PATCH v2 3/7] mm/gup: remove vmas parameter from get_user_pages_remote()
Date: Sat, 15 Apr 2023 10:08:34 +0100
Message-Id: <631001ecc556c5e348ff4f47719334c31f7bd592.1681547405.git.lstoakes@gmail.com>

The only invocations of get_user_pages_remote() which used the vmas
parameter were for a single page, so the callers can instead simply look up
the VMA directly. In particular:

- __update_ref_ctr() looked up the VMA but did nothing with it, so we
  simply remove the lookup.

- __access_remote_vm() was already using vma_lookup() when the original
  lookup failed, so performing the lookup directly also de-duplicates the
  code.

This forms part of a broader set of patches intended to eliminate the vmas
parameter altogether.

Signed-off-by: Lorenzo Stoakes
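The pattern adopted by these conversions can be sketched as follows. This is
an illustrative example with an invented helper name, not code from the
patch; it assumes the mmap_lock is held across both the GUP call and the
lookup:

#include <linux/mm.h>

/* Hypothetical sketch: pin a remote page, then find its VMA explicitly. */
static int example_remote_page_and_vma(struct mm_struct *mm, unsigned long addr)
{
        struct vm_area_struct *vma;
        struct page *page;
        long ret;

        mmap_read_lock(mm);
        ret = get_user_pages_remote(mm, addr, 1, FOLL_WRITE, &page, NULL);
        if (ret <= 0) {
                mmap_read_unlock(mm);
                return -EFAULT;
        }

        /* mmap_lock is still held, so this lookup cannot race with unmap. */
        vma = vma_lookup(mm, addr);
        if (vma) {
                /* ... inspect vma->vm_flags, vma->vm_file, etc. ... */
        }

        put_page(page);
        mmap_read_unlock(mm);
        return vma ? 0 : -EINVAL;
}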
---
 arch/arm64/kernel/mte.c   |  7 ++++---
 arch/s390/kvm/interrupt.c |  2 +-
 fs/exec.c                 |  2 +-
 include/linux/mm.h        |  2 +-
 kernel/events/uprobes.c   | 12 +++++++-----
 mm/gup.c                  | 12 ++++--------
 mm/memory.c               |  9 +++++----
 mm/rmap.c                 |  2 +-
 security/tomoyo/domain.c  |  2 +-
 virt/kvm/async_pf.c       |  3 +--
 10 files changed, 26 insertions(+), 27 deletions(-)

diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index f5bcb0dc6267..d43a744d7919 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -419,7 +419,6 @@ long get_mte_ctrl(struct task_struct *task)
 static int __access_remote_tags(struct mm_struct *mm, unsigned long addr,
 				struct iovec *kiov, unsigned int gup_flags)
 {
-	struct vm_area_struct *vma;
 	void __user *buf = kiov->iov_base;
 	size_t len = kiov->iov_len;
 	int ret;
@@ -432,12 +431,13 @@ static int __access_remote_tags(struct mm_struct *mm, unsigned long addr,
 		return -EIO;
 	while (len) {
+		struct vm_area_struct *vma;
 		unsigned long tags, offset;
 		void *maddr;
 		struct page *page = NULL;
 		ret = get_user_pages_remote(mm, addr, 1, gup_flags, &page,
-					    &vma, NULL);
+					    NULL);
 		if (ret <= 0)
 			break;
@@ -448,7 +448,8 @@ static int __access_remote_tags(struct mm_struct *mm, unsigned long addr,
 		 * would cause the existing tags to be cleared if the page
 		 * was never mapped with PROT_MTE.
 		 */
-		if (!(vma->vm_flags & VM_MTE)) {
+		vma = vma_lookup(mm, addr);
+		if (!vma || !(vma->vm_flags & VM_MTE)) {
 			ret = -EOPNOTSUPP;
 			put_page(page);
 			break;
diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
index 9250fde1f97d..c19d0cb7d2f2 100644
--- a/arch/s390/kvm/interrupt.c
+++ b/arch/s390/kvm/interrupt.c
@@ -2777,7 +2777,7 @@ static struct page *get_map_page(struct kvm *kvm, u64 uaddr)
 	mmap_read_lock(kvm->mm);
 	get_user_pages_remote(kvm->mm, uaddr, 1, FOLL_WRITE,
-			      &page, NULL, NULL);
+			      &page, NULL);
 	mmap_read_unlock(kvm->mm);
 	return page;
 }
diff --git a/fs/exec.c b/fs/exec.c
index 87cf3a2f0e9a..d8d48ee15aac 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -219,7 +219,7 @@ static struct page *get_arg_page(struct linux_binprm *bprm, unsigned long pos,
 	 */
 	mmap_read_lock(bprm->mm);
 	ret = get_user_pages_remote(bprm->mm, pos, 1, gup_flags,
-				    &page, NULL, NULL);
+				    &page, NULL);
 	mmap_read_unlock(bprm->mm);
 	if (ret <= 0)
 		return NULL;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ec9875c59f6d..1bfe73a2b6d3 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2374,7 +2374,7 @@ extern int __access_remote_vm(struct mm_struct *mm, unsigned long addr,
 long get_user_pages_remote(struct mm_struct *mm,
 			   unsigned long start, unsigned long nr_pages,
 			   unsigned int gup_flags, struct page **pages,
-			   struct vm_area_struct **vmas, int *locked);
+			   int *locked);
 long pin_user_pages_remote(struct mm_struct *mm,
 			   unsigned long start, unsigned long nr_pages,
 			   unsigned int gup_flags, struct page **pages,
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 59887c69d54c..b21993cd2dcc 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -365,7 +365,6 @@ __update_ref_ctr(struct mm_struct *mm, unsigned long vaddr, short d)
 {
 	void *kaddr;
 	struct page *page;
-	struct vm_area_struct *vma;
 	int ret;
 	short *ptr;
@@ -373,7 +372,7 @@ __update_ref_ctr(struct mm_struct *mm, unsigned long vaddr, short d)
 		return -EINVAL;
 	ret = get_user_pages_remote(mm, vaddr, 1,
-				    FOLL_WRITE, &page, &vma, NULL);
+				    FOLL_WRITE, &page, NULL);
 	if (unlikely(ret <= 0)) {
 		/*
 		 * We are asking for 1 page. If get_user_pages_remote() fails,
@@ -475,10 +474,14 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
 		gup_flags |= FOLL_SPLIT_PMD;
 	/* Read the page with vaddr into memory */
 	ret = get_user_pages_remote(mm, vaddr, 1, gup_flags,
-				    &old_page, &vma, NULL);
+				    &old_page, NULL);
 	if (ret <= 0)
 		return ret;
+	vma = vma_lookup(mm, vaddr);
+	if (!vma)
+		goto put_old;
+
 	ret = verify_opcode(old_page, vaddr, &opcode);
 	if (ret <= 0)
 		goto put_old;
@@ -2027,8 +2030,7 @@ static int is_trap_at_addr(struct mm_struct *mm, unsigned long vaddr)
 	 * but we treat this as a 'remote' access since it is
 	 * essentially a kernel access to the memory.
 	 */
-	result = get_user_pages_remote(mm, vaddr, 1, FOLL_FORCE, &page,
-				       NULL, NULL);
+	result = get_user_pages_remote(mm, vaddr, 1, FOLL_FORCE, &page, NULL);
 	if (result < 0)
 		return result;
diff --git a/mm/gup.c b/mm/gup.c
index 931c805bc32b..9440aa54c741 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2165,8 +2165,6 @@ static bool is_valid_gup_args(struct page **pages, struct vm_area_struct **vmas,
 * @pages:	array that receives pointers to the pages pinned.
 *		Should be at least nr_pages long. Or NULL, if caller
 *		only intends to ensure the pages are faulted in.
- * @vmas:	array of pointers to vmas corresponding to each page.
- *		Or NULL if the caller does not require them.
 * @locked:	pointer to lock flag indicating whether lock is held and
 *		subsequently whether VM_FAULT_RETRY functionality can be
 *		utilised. Lock must initially be held.
@@ -2181,8 +2179,6 @@ static bool is_valid_gup_args(struct page **pages, struct vm_area_struct **vmas,
 *
 * The caller is responsible for releasing returned @pages, via put_page().
 *
- * @vmas are valid only as long as mmap_lock is held.
- *
 * Must be called with mmap_lock held for read or write.
 *
 * get_user_pages_remote walks a process's page tables and takes a reference
@@ -2219,15 +2215,15 @@ static bool is_valid_gup_args(struct page **pages, struct vm_area_struct **vmas,
 long get_user_pages_remote(struct mm_struct *mm,
 			   unsigned long start, unsigned long nr_pages,
 			   unsigned int gup_flags, struct page **pages,
-			   struct vm_area_struct **vmas, int *locked)
+			   int *locked)
 {
 	int local_locked = 1;
-	if (!is_valid_gup_args(pages, vmas, locked, &gup_flags,
+	if (!is_valid_gup_args(pages, NULL, locked, &gup_flags,
 			       FOLL_TOUCH | FOLL_REMOTE))
 		return -EINVAL;
-	return __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
+	return __get_user_pages_locked(mm, start, nr_pages, pages, NULL,
 				       locked ? locked : &local_locked, gup_flags);
 }
@@ -2237,7 +2233,7 @@ EXPORT_SYMBOL(get_user_pages_remote);
 long get_user_pages_remote(struct mm_struct *mm,
 			   unsigned long start, unsigned long nr_pages,
 			   unsigned int gup_flags, struct page **pages,
-			   struct vm_area_struct **vmas, int *locked)
+			   int *locked)
 {
 	return 0;
 }
diff --git a/mm/memory.c b/mm/memory.c
index 8ddb10199e8d..913e693322f2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5591,7 +5591,9 @@ int __access_remote_vm(struct mm_struct *mm, unsigned long addr, void *buf,
 		struct page *page = NULL;
 		ret = get_user_pages_remote(mm, addr, 1,
-					    gup_flags, &page, &vma, NULL);
+					    gup_flags, &page, NULL);
+		vma = vma_lookup(mm, addr);
+
 		if (ret <= 0) {
 #ifndef CONFIG_HAVE_IOREMAP_PROT
 			break;
@@ -5600,7 +5602,6 @@ int __access_remote_vm(struct mm_struct *mm, unsigned long addr, void *buf,
 			 * Check if this is a VM_IO | VM_PFNMAP VMA, which
 			 * we can access using slightly different code.
 			 */
-			vma = vma_lookup(mm, addr);
 			if (!vma)
 				break;
 			if (vma->vm_ops && vma->vm_ops->access)
@@ -5617,11 +5618,11 @@ int __access_remote_vm(struct mm_struct *mm, unsigned long addr, void *buf,
 				bytes = PAGE_SIZE-offset;
 			maddr = kmap(page);
-			if (write) {
+			if (write && vma) {
 				copy_to_user_page(vma, page, addr,
 						  maddr + offset, buf, bytes);
 				set_page_dirty_lock(page);
-			} else {
+			} else if (vma) {
 				copy_from_user_page(vma, page, addr,
 						    buf, maddr + offset, bytes);
 			}
diff --git a/mm/rmap.c b/mm/rmap.c
index ba901c416785..756ea8a9bb90 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2324,7 +2324,7 @@ int make_device_exclusive_range(struct mm_struct *mm, unsigned long start,
 	npages = get_user_pages_remote(mm, start, npages,
 				       FOLL_GET | FOLL_WRITE | FOLL_SPLIT_PMD,
-				       pages, NULL, NULL);
+				       pages, NULL);
 	if (npages < 0)
 		return npages;
diff --git a/security/tomoyo/domain.c b/security/tomoyo/domain.c
index 31af29f669d2..ac20c0bdff9d 100644
--- a/security/tomoyo/domain.c
+++ b/security/tomoyo/domain.c
@@ -916,7 +916,7 @@ bool tomoyo_dump_page(struct linux_binprm *bprm, unsigned long pos,
 	 */
 	mmap_read_lock(bprm->mm);
 	ret = get_user_pages_remote(bprm->mm, pos, 1,
-				    FOLL_FORCE, &page, NULL, NULL);
+				    FOLL_FORCE, &page, NULL);
 	mmap_read_unlock(bprm->mm);
 	if (ret <= 0)
 		return false;
diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
index 9bfe1d6f6529..e033c79d528e 100644
--- a/virt/kvm/async_pf.c
+++ b/virt/kvm/async_pf.c
@@ -61,8 +61,7 @@ static void async_pf_execute(struct work_struct *work)
 	 * access remotely.
 	 */
 	mmap_read_lock(mm);
-	get_user_pages_remote(mm, addr, 1, FOLL_WRITE, NULL, NULL,
-			      &locked);
+	get_user_pages_remote(mm, addr, 1, FOLL_WRITE, NULL, &locked);
 	if (locked)
 		mmap_read_unlock(mm);

From patchwork Sat Apr 15 09:08:39 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 83656
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Matthew Wilcox, David Hildenbrand, Lorenzo Stoakes
Subject: [PATCH v2 4/7] mm/gup: introduce the FOLL_SAME_FILE GUP flag
Date: Sat, 15 Apr 2023 10:08:39 +0100
Message-Id: <45702c7b87c8487455783b482b67dfcf2d24e6a9.1681547405.git.lstoakes@gmail.com>

This flag causes GUP to assert that all VMAs within the input range possess
the same vma->vm_file. If they do not, the operation fails.

This is part of a patch series which eliminates the vmas parameter from the
GUP API, implementing the one remaining assertion within the entire kernel
that requires access to the VMAs associated with a GUP range.

Signed-off-by: Lorenzo Stoakes
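A hedged usage sketch follows; the helper name is invented and the code is
not part of the patch. Note that at this point in the series pin_user_pages()
still takes a vmas argument, so NULL is passed for it:

#include <linux/mm.h>
#include <linux/sched.h>

/* Hypothetical sketch: pin a range that must be backed by a single file. */
static long example_pin_same_file(unsigned long start, unsigned long nr_pages,
                                  struct page **pages)
{
        long ret;

        mmap_read_lock(current->mm);
        /* GUP itself now fails if the VMAs in the range disagree on vm_file. */
        ret = pin_user_pages(start, nr_pages,
                             FOLL_WRITE | FOLL_LONGTERM | FOLL_SAME_FILE,
                             pages, NULL);
        mmap_read_unlock(current->mm);

        return ret;
}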
---
 include/linux/mm_types.h |  2 ++
 mm/gup.c                 | 16 ++++++++++++----
 2 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 3fc9e680f174..84d1aec9dbab 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1185,6 +1185,8 @@ enum {
 	FOLL_PCI_P2PDMA = 1 << 10,
 	/* allow interrupts from generic signals */
 	FOLL_INTERRUPTIBLE = 1 << 11,
+	/* assert that the range spans VMAs with the same vma->vm_file */
+	FOLL_SAME_FILE = 1 << 12,
 	/* See also internal only FOLL flags in mm/internal.h */
 };
diff --git a/mm/gup.c b/mm/gup.c
index 9440aa54c741..3954ce499a4a 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -959,7 +959,8 @@ static int faultin_page(struct vm_area_struct *vma,
 	return 0;
 }
-static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
+static int check_vma_flags(struct vm_area_struct *vma, struct file *file,
+			   unsigned long gup_flags)
 {
 	vm_flags_t vm_flags = vma->vm_flags;
 	int write = (gup_flags & FOLL_WRITE);
@@ -968,7 +969,7 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
 	if (vm_flags & (VM_IO | VM_PFNMAP))
 		return -EFAULT;
-	if (gup_flags & FOLL_ANON && !vma_is_anonymous(vma))
+	if ((gup_flags & FOLL_ANON) && !vma_is_anonymous(vma))
 		return -EFAULT;
 	if ((gup_flags & FOLL_LONGTERM) && vma_is_fsdax(vma))
@@ -977,6 +978,9 @@
 	if (vma_is_secretmem(vma))
 		return -EFAULT;
+	if ((gup_flags & FOLL_SAME_FILE) && vma->vm_file != file)
+		return -EFAULT;
+
 	if (write) {
 		if (!(vm_flags & VM_WRITE)) {
 			if (!(gup_flags & FOLL_FORCE))
@@ -1081,6 +1085,7 @@ static long __get_user_pages(struct mm_struct *mm,
 	long ret = 0, i = 0;
 	struct vm_area_struct *vma = NULL;
 	struct follow_page_context ctx = { NULL };
+	struct file *file = NULL;
 	if (!nr_pages)
 		return 0;
@@ -1111,10 +1116,13 @@ static long __get_user_pages(struct mm_struct *mm,
 				ret = -EFAULT;
 				goto out;
 			}
-			ret = check_vma_flags(vma, gup_flags);
+			ret = check_vma_flags(vma, i == 0 ? vma->vm_file : file,
+					      gup_flags);
 			if (ret)
 				goto out;
+			file = vma->vm_file;
+
 			if (is_vm_hugetlb_page(vma)) {
 				i = follow_hugetlb_page(mm, vma, pages, vmas,
 							&start, &nr_pages, i,
@@ -1595,7 +1603,7 @@ long faultin_vma_page_range(struct vm_area_struct *vma, unsigned long start,
 	 * We want to report -EINVAL instead of -EFAULT for any permission
 	 * problems or incompatible mappings.
 	 */
-	if (check_vma_flags(vma, gup_flags))
+	if (check_vma_flags(vma, vma->vm_file, gup_flags))
 		return -EINVAL;
 	ret = __get_user_pages(mm, start, nr_pages, gup_flags,

From patchwork Sat Apr 15 09:08:42 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 83657
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Matthew Wilcox, David Hildenbrand, Jens Axboe, Pavel Begunkov, io-uring@vger.kernel.org,
    Lorenzo Stoakes
Subject: [PATCH v2 5/7] io_uring: rsrc: use FOLL_SAME_FILE on pin_user_pages()
Date: Sat, 15 Apr 2023 10:08:42 +0100
Message-Id: <362e96284273ef0781df0116b6491ce97b0fe073.1681547405.git.lstoakes@gmail.com>

Commit edd478269640 ("io_uring/rsrc: disallow multi-source reg buffers")
prevents io_pin_pages() from pinning pages spanning multiple VMAs with
permitted characteristics (anon/huge), requiring that all VMAs share the
same vm_file.

The newly introduced FOLL_SAME_FILE flag permits this to be expressed as a
GUP flag rather than having to retrieve the VMAs to perform the check. We
then only need to look up the first VMA to assert the anon/hugepage
requirement, as we know the remaining VMAs possess the same characteristics.

Doing this eliminates the one instance of vmas being used by
pin_user_pages().

Signed-off-by: Lorenzo Stoakes
Suggested-by: Matthew Wilcox (Oracle)
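Why only the first VMA needs examining afterwards can be seen in a condensed,
illustrative form below. This is a simplified sketch with an invented helper
name and no error handling; the real logic is in the diff that follows:

#include <linux/mm.h>
#include <linux/hugetlb.h>
#include <linux/sched.h>

/* Simplified sketch of the post-patch check performed by io_pin_pages(). */
static int example_check_first_vma(unsigned long ubuf)
{
        /* FOLL_SAME_FILE guaranteed every pinned VMA shares this vm_file. */
        struct vm_area_struct *vma = vma_lookup(current->mm, ubuf);
        struct file *file = vma ? vma->vm_file : NULL;

        /* file-backed memory is only allowed for shmem/hugetlb mappings */
        if (file && !vma_is_shmem(vma) && !is_file_hugepages(file))
                return -EOPNOTSUPP;

        return 0;
}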
---
 io_uring/rsrc.c | 39 ++++++++++++++++-----------------------
 1 file changed, 16 insertions(+), 23 deletions(-)

diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index 7a43aed8e395..adc860bcbd4f 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -1141,9 +1141,8 @@ static int io_buffer_account_pin(struct io_ring_ctx *ctx, struct page **pages,
 struct page **io_pin_pages(unsigned long ubuf, unsigned long len, int *npages)
 {
 	unsigned long start, end, nr_pages;
-	struct vm_area_struct **vmas = NULL;
 	struct page **pages = NULL;
-	int i, pret, ret = -ENOMEM;
+	int pret, ret = -ENOMEM;
 	end = (ubuf + len + PAGE_SIZE - 1) >> PAGE_SHIFT;
 	start = ubuf >> PAGE_SHIFT;
@@ -1153,31 +1152,26 @@ struct page **io_pin_pages(unsigned long ubuf, unsigned long len, int *npages)
 	if (!pages)
 		goto done;
-	vmas = kvmalloc_array(nr_pages, sizeof(struct vm_area_struct *),
-			      GFP_KERNEL);
-	if (!vmas)
-		goto done;
-
 	ret = 0;
 	mmap_read_lock(current->mm);
-	pret = pin_user_pages(ubuf, nr_pages, FOLL_WRITE | FOLL_LONGTERM,
-			      pages, vmas);
+
+	pret = pin_user_pages(ubuf, nr_pages,
+			      FOLL_WRITE | FOLL_LONGTERM | FOLL_SAME_FILE,
+			      pages, NULL);
 	if (pret == nr_pages) {
-		struct file *file = vmas[0]->vm_file;
+		/*
+		 * lookup the first VMA, we require that all VMAs in range
+		 * maintain the same file characteristics, as enforced by
+		 * FOLL_SAME_FILE
+		 */
+		struct vm_area_struct *vma = vma_lookup(current->mm, ubuf);
+		struct file *file;
 		/* don't support file backed memory */
-		for (i = 0; i < nr_pages; i++) {
-			if (vmas[i]->vm_file != file) {
-				ret = -EINVAL;
-				break;
-			}
-			if (!file)
-				continue;
-			if (!vma_is_shmem(vmas[i]) && !is_file_hugepages(file)) {
-				ret = -EOPNOTSUPP;
-				break;
-			}
-		}
+		file = vma->vm_file;
+		if (file && !vma_is_shmem(vma) && !is_file_hugepages(file))
+			ret = -EOPNOTSUPP;
+
 		*npages = nr_pages;
 	} else {
 		ret = pret < 0 ? pret : -EFAULT;
@@ -1194,7 +1188,6 @@ struct page **io_pin_pages(unsigned long ubuf, unsigned long len, int *npages)
 	}
 	ret = 0;
 done:
-	kvfree(vmas);
 	if (ret < 0) {
 		kvfree(pages);
 		pages = ERR_PTR(ret);

From patchwork Sat Apr 15 09:08:45 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 83660
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Matthew Wilcox, David Hildenbrand, Michael Ellerman, Nicholas Piggin,
 Christophe Leroy, Dennis Dalessandro, Jason Gunthorpe, Leon Romanovsky,
 Christian Benvenuti, Nelson Escobar, Bernard Metzler, Mauro Carvalho Chehab,
 "Michael S. Tsirkin", Jason Wang, Jens Axboe, Pavel Begunkov, Bjorn Topel,
 Magnus Karlsson, Maciej Fijalkowski, Jonathan Lemon, "David S. Miller",
 Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov,
 Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
 linuxppc-dev@lists.ozlabs.org, linux-rdma@vger.kernel.org,
 linux-media@vger.kernel.org, virtualization@lists.linux-foundation.org,
 kvm@vger.kernel.org, netdev@vger.kernel.org, io-uring@vger.kernel.org,
 bpf@vger.kernel.org, Lorenzo Stoakes
Subject: [PATCH v2 6/7] mm/gup: remove vmas parameter from pin_user_pages()
Date: Sat, 15 Apr 2023 10:08:45 +0100
Message-Id: <925661e55664dd65a6aaa9f60e96bd0d71ed8197.1681547405.git.lstoakes@gmail.com>
X-Mailer: git-send-email 2.40.0

After the introduction of FOLL_SAME_FILE we no longer require vmas for any
invocation of pin_user_pages(), so eliminate this parameter from the
function and all callers.

This clears the way to removing the vmas parameter from GUP altogether.

Signed-off-by: Lorenzo Stoakes
---
 arch/powerpc/mm/book3s64/iommu_api.c       | 2 +-
 drivers/infiniband/hw/qib/qib_user_pages.c | 2 +-
 drivers/infiniband/hw/usnic/usnic_uiom.c   | 2 +-
 drivers/infiniband/sw/siw/siw_mem.c        | 2 +-
 drivers/media/v4l2-core/videobuf-dma-sg.c  | 2 +-
 drivers/vdpa/vdpa_user/vduse_dev.c         | 2 +-
 drivers/vhost/vdpa.c                       | 2 +-
 include/linux/mm.h                         | 3 +--
 io_uring/rsrc.c                            | 2 +-
 mm/gup.c                                   | 9 +++------
 mm/gup_test.c                              | 9 ++++-----
 net/xdp/xdp_umem.c                         | 2 +-
 12 files changed, 17 insertions(+), 22 deletions(-)
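Every call-site hunk below has the same mechanical shape. Taking the
vduse_dev.c change further down as representative (identifiers as used in that
driver), a conversion looks roughly like:

	/* before this patch - the trailing vmas argument was always NULL */
	pinned = pin_user_pages(uaddr, npages, FOLL_LONGTERM | FOLL_WRITE,
				page_list, NULL);

	/* after this patch - the parameter is gone, the call simply drops it */
	pinned = pin_user_pages(uaddr, npages, FOLL_LONGTERM | FOLL_WRITE,
				page_list);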
diff --git a/arch/powerpc/mm/book3s64/iommu_api.c b/arch/powerpc/mm/book3s64/iommu_api.c
index 81d7185e2ae8..d19fb1f3007d 100644
--- a/arch/powerpc/mm/book3s64/iommu_api.c
+++ b/arch/powerpc/mm/book3s64/iommu_api.c
@@ -105,7 +105,7 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
 		ret = pin_user_pages(ua + (entry << PAGE_SHIFT), n,
 				     FOLL_WRITE | FOLL_LONGTERM,
-				     mem->hpages + entry, NULL);
+				     mem->hpages + entry);
 		if (ret == n) {
 			pinned += n;
 			continue;
diff --git a/drivers/infiniband/hw/qib/qib_user_pages.c b/drivers/infiniband/hw/qib/qib_user_pages.c
index f693bc753b6b..1bb7507325bc 100644
--- a/drivers/infiniband/hw/qib/qib_user_pages.c
+++ b/drivers/infiniband/hw/qib/qib_user_pages.c
@@ -111,7 +111,7 @@ int qib_get_user_pages(unsigned long start_page, size_t num_pages,
 		ret = pin_user_pages(start_page + got * PAGE_SIZE,
 				     num_pages - got,
 				     FOLL_LONGTERM | FOLL_WRITE,
-				     p + got, NULL);
+				     p + got);
 		if (ret < 0) {
 			mmap_read_unlock(current->mm);
 			goto bail_release;
diff --git a/drivers/infiniband/hw/usnic/usnic_uiom.c b/drivers/infiniband/hw/usnic/usnic_uiom.c
index 2a5cac2658ec..84e0f41e7dfa 100644
--- a/drivers/infiniband/hw/usnic/usnic_uiom.c
+++ b/drivers/infiniband/hw/usnic/usnic_uiom.c
@@ -140,7 +140,7 @@ static int usnic_uiom_get_pages(unsigned long addr, size_t size, int writable,
 		ret = pin_user_pages(cur_base,
 				     min_t(unsigned long, npages,
				     PAGE_SIZE / sizeof(struct page *)),
-				     gup_flags, page_list, NULL);
+				     gup_flags, page_list);
 
 		if (ret < 0)
 			goto out;
diff --git a/drivers/infiniband/sw/siw/siw_mem.c b/drivers/infiniband/sw/siw/siw_mem.c
index f51ab2ccf151..e6e25f15567d 100644
--- a/drivers/infiniband/sw/siw/siw_mem.c
+++ b/drivers/infiniband/sw/siw/siw_mem.c
@@ -422,7 +422,7 @@ struct siw_umem *siw_umem_get(u64 start, u64 len, bool writable)
 		umem->page_chunk[i].plist = plist;
 		while (nents) {
 			rv = pin_user_pages(first_page_va, nents, foll_flags,
-					    plist, NULL);
+					    plist);
 			if (rv < 0)
 				goto out_sem_up;
diff --git a/drivers/media/v4l2-core/videobuf-dma-sg.c b/drivers/media/v4l2-core/videobuf-dma-sg.c
index 53001532e8e3..405b89ea1054 100644
--- a/drivers/media/v4l2-core/videobuf-dma-sg.c
+++ b/drivers/media/v4l2-core/videobuf-dma-sg.c
@@ -180,7 +180,7 @@ static int videobuf_dma_init_user_locked(struct videobuf_dmabuf *dma,
 		data, size, dma->nr_pages);
 
 	err = pin_user_pages(data & PAGE_MASK, dma->nr_pages, gup_flags,
-			     dma->pages, NULL);
+			     dma->pages);
 
 	if (err != dma->nr_pages) {
 		dma->nr_pages = (err >= 0) ? err : 0;
diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
index 0c3b48616a9f..1f80254604f0 100644
--- a/drivers/vdpa/vdpa_user/vduse_dev.c
+++ b/drivers/vdpa/vdpa_user/vduse_dev.c
@@ -995,7 +995,7 @@ static int vduse_dev_reg_umem(struct vduse_dev *dev,
 		goto out;
 
 	pinned = pin_user_pages(uaddr, npages, FOLL_LONGTERM | FOLL_WRITE,
-				page_list, NULL);
+				page_list);
 	if (pinned != npages) {
 		ret = pinned < 0 ? pinned : -ENOMEM;
 		goto out;
diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index 7be9d9d8f01c..4317128c1c62 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -952,7 +952,7 @@ static int vhost_vdpa_pa_map(struct vhost_vdpa *v,
 	while (npages) {
 		sz2pin = min_t(unsigned long, npages, list_size);
 		pinned = pin_user_pages(cur_base, sz2pin,
-					gup_flags, page_list, NULL);
+					gup_flags, page_list);
 		if (sz2pin != pinned) {
 			if (pinned < 0) {
 				ret = pinned;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1bfe73a2b6d3..363e3d0d46f4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2382,8 +2382,7 @@ long pin_user_pages_remote(struct mm_struct *mm,
 long get_user_pages(unsigned long start, unsigned long nr_pages,
 		    unsigned int gup_flags, struct page **pages);
 long pin_user_pages(unsigned long start, unsigned long nr_pages,
-		    unsigned int gup_flags, struct page **pages,
-		    struct vm_area_struct **vmas);
+		    unsigned int gup_flags, struct page **pages);
 long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
 			     struct page **pages, unsigned int gup_flags);
 long pin_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index adc860bcbd4f..92d0d47e322c 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -1157,7 +1157,7 @@ struct page **io_pin_pages(unsigned long ubuf, unsigned long len, int *npages)
 
 	pret = pin_user_pages(ubuf, nr_pages,
 			      FOLL_WRITE | FOLL_LONGTERM | FOLL_SAME_FILE,
-			      pages, NULL);
+			      pages);
 	if (pret == nr_pages) {
 		/*
 		 * lookup the first VMA, we require that all VMAs in range
diff --git a/mm/gup.c b/mm/gup.c
index 3954ce499a4a..714970ef3b30 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -3132,8 +3132,6 @@ EXPORT_SYMBOL(pin_user_pages_remote);
  * @gup_flags:	flags modifying lookup behaviour
  * @pages:	array that receives pointers to the pages pinned.
  *		Should be at least nr_pages long.
- * @vmas:	array of pointers to vmas corresponding to each page.
- *		Or NULL if the caller does not require them.
  *
  * Nearly the same as get_user_pages(), except that FOLL_TOUCH is not set, and
  * FOLL_PIN is set.
@@ -3142,15 +3140,14 @@ EXPORT_SYMBOL(pin_user_pages_remote);
  * see Documentation/core-api/pin_user_pages.rst for details.
  */
 long pin_user_pages(unsigned long start, unsigned long nr_pages,
-		    unsigned int gup_flags, struct page **pages,
-		    struct vm_area_struct **vmas)
+		    unsigned int gup_flags, struct page **pages)
 {
 	int locked = 1;
 
-	if (!is_valid_gup_args(pages, vmas, NULL, &gup_flags, FOLL_PIN))
+	if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, FOLL_PIN))
 		return 0;
 	return __gup_longterm_locked(current->mm, start, nr_pages,
-				     pages, vmas, &locked, gup_flags);
+				     pages, NULL, &locked, gup_flags);
 }
 EXPORT_SYMBOL(pin_user_pages);
 
diff --git a/mm/gup_test.c b/mm/gup_test.c
index 9ba8ea23f84e..1668ce0e0783 100644
--- a/mm/gup_test.c
+++ b/mm/gup_test.c
@@ -146,18 +146,17 @@ static int __gup_test_ioctl(unsigned int cmd,
 					    pages + i);
 			break;
 		case PIN_BASIC_TEST:
-			nr = pin_user_pages(addr, nr, gup->gup_flags, pages + i,
-					    NULL);
+			nr = pin_user_pages(addr, nr, gup->gup_flags, pages + i);
 			break;
 		case PIN_LONGTERM_BENCHMARK:
 			nr = pin_user_pages(addr, nr,
 					    gup->gup_flags | FOLL_LONGTERM,
-					    pages + i, NULL);
+					    pages + i);
 			break;
 		case DUMP_USER_PAGES_TEST:
 			if (gup->test_flags & GUP_TEST_FLAG_DUMP_PAGES_USE_PIN)
 				nr = pin_user_pages(addr, nr, gup->gup_flags,
-						    pages + i, NULL);
+						    pages + i);
 			else
 				nr = get_user_pages(addr, nr, gup->gup_flags,
 						    pages + i);
@@ -270,7 +269,7 @@ static inline int pin_longterm_test_start(unsigned long arg)
 						     gup_flags, pages);
 		else
 			cur_pages = pin_user_pages(addr, remaining_pages,
-						   gup_flags, pages, NULL);
+						   gup_flags, pages);
 		if (cur_pages < 0) {
 			pin_longterm_test_stop();
 			ret = cur_pages;
diff --git a/net/xdp/xdp_umem.c b/net/xdp/xdp_umem.c
index 02207e852d79..06cead2b8e34 100644
--- a/net/xdp/xdp_umem.c
+++ b/net/xdp/xdp_umem.c
@@ -103,7 +103,7 @@ static int xdp_umem_pin_pages(struct xdp_umem *umem, unsigned long address)
 	mmap_read_lock(current->mm);
 	npgs = pin_user_pages(address, umem->npgs,
-			      gup_flags | FOLL_LONGTERM, &umem->pgs[0], NULL);
+			      gup_flags | FOLL_LONGTERM, &umem->pgs[0]);
 	mmap_read_unlock(current->mm);
 
 	if (npgs != umem->npgs) {
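Taken together with the hunks above, a minimal long-term pinning caller of the
final four-argument API looks like the sketch below. It is hypothetical rather
than taken from the tree: the helper name and error policy are illustrative,
but the calls themselves (mmap_read_lock(), pin_user_pages(),
unpin_user_pages()) are used exactly as the converted call sites use them.

static int pin_user_range(unsigned long uaddr, unsigned long nr_pages,
			  struct page **pages)
{
	long pinned;

	mmap_read_lock(current->mm);
	pinned = pin_user_pages(uaddr, nr_pages, FOLL_WRITE | FOLL_LONGTERM,
				pages);
	mmap_read_unlock(current->mm);

	if (pinned < 0)
		return pinned;
	if (pinned != nr_pages) {
		/* Partial pins must be released by the caller. */
		unpin_user_pages(pages, pinned);
		return -EFAULT;
	}
	return 0;
}

Note that the mmap_lock is only needed around the walk that establishes the
pins; FOLL_LONGTERM pins remain valid after it is dropped and are released
with unpin_user_pages() as before.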
From patchwork Sat Apr 15 09:09:04 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 83659
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Matthew Wilcox, David Hildenbrand, Mike Kravetz, Muchun Song,
 Lorenzo Stoakes
Subject: [PATCH v2 7/7] mm/gup: remove vmas array from internal GUP functions
Date: Sat, 15 Apr 2023 10:09:04 +0100
Message-Id: <313e2b8ef74eb78fd1a34d21f0882fce12d9fdab.1681547405.git.lstoakes@gmail.com>
X-Mailer: git-send-email 2.40.0

Now that all callers of GUP APIs which took a vmas parameter have been
updated, eliminate the parameter from the internal GUP functions
altogether.

This removes a class of bugs in which a vmas array could be kept around
longer than the mmap_lock that made it valid: with no vmas output, we need
no longer worry about the lock being dropped during a GUP operation and
leaving dangling VMA pointers behind.

This simplifies the GUP API and makes its purpose considerably clearer -
follow flags are applied and, if pinning, an array of pages is returned.

Signed-off-by: Lorenzo Stoakes
---
 include/linux/hugetlb.h | 10 ++---
 mm/gup.c                | 83 +++++++++++++++--------------------------
 mm/hugetlb.c            | 24 +++++-------
 3 files changed, 45 insertions(+), 72 deletions(-)
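The bug class referred to above is easiest to see in a deliberately broken
sketch. Nothing below is code from the tree; it is a hypothetical caller
written against the five-argument pin_user_pages() as it existed before patch
6/7 of this series.

static void stale_vma_bug(unsigned long addr)
{
	struct vm_area_struct *vmas[1];
	struct page *pages[1];
	long ret;

	mmap_read_lock(current->mm);
	/* old form: GUP fills in vmas[] as a side effect */
	ret = pin_user_pages(addr, 1, FOLL_WRITE, pages, vmas);
	mmap_read_unlock(current->mm);

	/*
	 * Broken: vmas[0] was only valid while the mmap_lock was held and may
	 * now point at a freed or reused VMA. With the parameter removed,
	 * this mistake can no longer be written; a caller that needs the VMA
	 * must vma_lookup() it itself while it still holds the lock.
	 */
	if (ret == 1) {
		if (vmas[0]->vm_file)
			pr_info("pinned a file-backed page\n");
		unpin_user_pages(pages, 1);
	}
}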
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 28703fe22386..2735e7a2b998 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -141,9 +141,8 @@ int copy_hugetlb_page_range(struct mm_struct *, struct mm_struct *,
 struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
 				unsigned long address, unsigned int flags);
 long follow_hugetlb_page(struct mm_struct *, struct vm_area_struct *,
-			 struct page **, struct vm_area_struct **,
-			 unsigned long *, unsigned long *, long, unsigned int,
-			 int *);
+			 struct page **, unsigned long *, unsigned long *,
+			 long, unsigned int, int *);
 void unmap_hugepage_range(struct vm_area_struct *,
 			  unsigned long, unsigned long, struct page *,
 			  zap_flags_t);
@@ -297,9 +296,8 @@ static inline struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
 static inline long follow_hugetlb_page(struct mm_struct *mm,
 			struct vm_area_struct *vma, struct page **pages,
-			struct vm_area_struct **vmas, unsigned long *position,
-			unsigned long *nr_pages, long i, unsigned int flags,
-			int *nonblocking)
+			unsigned long *position, unsigned long *nr_pages,
+			long i, unsigned int flags, int *nonblocking)
 {
 	BUG();
 	return 0;
diff --git a/mm/gup.c b/mm/gup.c
index 714970ef3b30..385e428a4acb 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1028,8 +1026,6 @@ static int check_vma_flags(struct vm_area_struct *vma, struct file *file,
  * @pages:	array that receives pointers to the pages pinned.
  *		Should be at least nr_pages long. Or NULL, if caller
  *		only intends to ensure the pages are faulted in.
- * @vmas:	array of pointers to vmas corresponding to each page.
- *		Or NULL if the caller does not require them.
  * @locked:	whether we're still with the mmap_lock held
  *
  * Returns either number of pages pinned (which may be less than the
@@ -1043,8 +1041,6 @@ static int check_vma_flags(struct vm_area_struct *vma, struct file *file,
  *
  * The caller is responsible for releasing returned @pages, via put_page().
  *
- * @vmas are valid only as long as mmap_lock is held.
- *
  * Must be called with mmap_lock held. It may be released. See below.
  *
  * __get_user_pages walks a process's page tables and takes a reference to
@@ -1080,7 +1076,7 @@ static int check_vma_flags(struct vm_area_struct *vma, struct file *file,
 static long __get_user_pages(struct mm_struct *mm,
 		unsigned long start, unsigned long nr_pages,
 		unsigned int gup_flags, struct page **pages,
-		struct vm_area_struct **vmas, int *locked)
+		int *locked)
 {
 	long ret = 0, i = 0;
 	struct vm_area_struct *vma = NULL;
@@ -1124,9 +1120,9 @@ static long __get_user_pages(struct mm_struct *mm,
 			file = vma->vm_file;
 
 			if (is_vm_hugetlb_page(vma)) {
-				i = follow_hugetlb_page(mm, vma, pages, vmas,
-						&start, &nr_pages, i,
-						gup_flags, locked);
+				i = follow_hugetlb_page(mm, vma, pages,
+						&start, &nr_pages, i,
+						gup_flags, locked);
 				if (!*locked) {
 					/*
 					 * We've got a VM_FAULT_RETRY
@@ -1191,10 +1187,6 @@ static long __get_user_pages(struct mm_struct *mm,
 			ctx.page_mask = 0;
 		}
 next_page:
-		if (vmas) {
-			vmas[i] = vma;
-			ctx.page_mask = 0;
-		}
 		page_increm = 1 + (~(start >> PAGE_SHIFT) & ctx.page_mask);
 		if (page_increm > nr_pages)
 			page_increm = nr_pages;
@@ -1349,7 +1341,6 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
 						unsigned long start,
 						unsigned long nr_pages,
 						struct page **pages,
-						struct vm_area_struct **vmas,
 						int *locked,
 						unsigned int flags)
 {
@@ -1387,7 +1378,7 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
 	pages_done = 0;
 	for (;;) {
 		ret = __get_user_pages(mm, start, nr_pages, flags, pages,
-				       vmas, locked);
+				       locked);
 		if (!(flags & FOLL_UNLOCKABLE)) {
 			/* VM_FAULT_RETRY couldn't trigger, bypass */
 			pages_done = ret;
@@ -1451,7 +1442,7 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
 		*locked = 1;
 		ret = __get_user_pages(mm, start, 1, flags | FOLL_TRIED,
-				       pages, NULL, locked);
+				       pages, locked);
 		if (!*locked) {
 			/* Continue to retry until we succeeded */
 			BUG_ON(ret != 0);
@@ -1549,7 +1540,7 @@ long populate_vma_page_range(struct vm_area_struct *vma,
 	 * not result in a stack expansion that recurses back here.
 	 */
 	ret = __get_user_pages(mm, start, nr_pages, gup_flags,
-			       NULL, NULL, locked ? locked : &local_locked);
+			       NULL, locked ? locked : &local_locked);
 	lru_add_drain();
 	return ret;
 }
@@ -1607,7 +1598,7 @@ long faultin_vma_page_range(struct vm_area_struct *vma, unsigned long start,
 		return -EINVAL;
 
 	ret = __get_user_pages(mm, start, nr_pages, gup_flags,
-			       NULL, NULL, locked);
+			       NULL, locked);
 	lru_add_drain();
 	return ret;
 }
@@ -1675,8 +1666,7 @@ int __mm_populate(unsigned long start, unsigned long len, int ignore_errors)
 #else /* CONFIG_MMU */
 static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
 		unsigned long nr_pages, struct page **pages,
-		struct vm_area_struct **vmas, int *locked,
-		unsigned int foll_flags)
+		int *locked, unsigned int foll_flags)
 {
 	struct vm_area_struct *vma;
 	bool must_unlock = false;
@@ -1720,8 +1710,7 @@ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
 			if (pages[i])
 				get_page(pages[i]);
 		}
-		if (vmas)
-			vmas[i] = vma;
+
 		start = (start + PAGE_SIZE) & PAGE_MASK;
 	}
@@ -1902,8 +1891,7 @@ struct page *get_dump_page(unsigned long addr)
 	int locked = 0;
 	int ret;
 
-	ret = __get_user_pages_locked(current->mm, addr, 1, &page, NULL,
-				      &locked,
+	ret = __get_user_pages_locked(current->mm, addr, 1, &page, &locked,
 				      FOLL_FORCE | FOLL_DUMP | FOLL_GET);
 	return (ret == 1) ? page : NULL;
 }
@@ -2076,7 +2064,6 @@ static long __gup_longterm_locked(struct mm_struct *mm,
 				  unsigned long start,
 				  unsigned long nr_pages,
 				  struct page **pages,
-				  struct vm_area_struct **vmas,
 				  int *locked,
 				  unsigned int gup_flags)
 {
@@ -2084,13 +2071,13 @@ static long __gup_longterm_locked(struct mm_struct *mm,
 	long rc, nr_pinned_pages;
 
 	if (!(gup_flags & FOLL_LONGTERM))
-		return __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
+		return __get_user_pages_locked(mm, start, nr_pages, pages,
 					       locked, gup_flags);
 
 	flags = memalloc_pin_save();
 	do {
 		nr_pinned_pages = __get_user_pages_locked(mm, start, nr_pages,
-							  pages, vmas, locked,
+							  pages, locked,
 							  gup_flags);
 		if (nr_pinned_pages <= 0) {
 			rc = nr_pinned_pages;
@@ -2108,9 +2095,8 @@ static long __gup_longterm_locked(struct mm_struct *mm,
  * Check that the given flags are valid for the exported gup/pup interface, and
  * update them with the required flags that the caller must have set.
  */
-static bool is_valid_gup_args(struct page **pages, struct vm_area_struct **vmas,
-			      int *locked, unsigned int *gup_flags_p,
-			      unsigned int to_set)
+static bool is_valid_gup_args(struct page **pages, int *locked,
			      unsigned int *gup_flags_p, unsigned int to_set)
 {
 	unsigned int gup_flags = *gup_flags_p;
 
@@ -2152,13 +2138,6 @@ static bool is_valid_gup_args(struct page **pages, struct vm_area_struct **vmas,
 		       (gup_flags & FOLL_PCI_P2PDMA)))
 		return false;
 
-	/*
-	 * Can't use VMAs with locked, as locked allows GUP to unlock
-	 * which invalidates the vmas array
-	 */
-	if (WARN_ON_ONCE(vmas && (gup_flags & FOLL_UNLOCKABLE)))
-		return false;
-
 	*gup_flags_p = gup_flags;
 	return true;
 }
@@ -2227,11 +2206,11 @@ long get_user_pages_remote(struct mm_struct *mm,
 {
 	int local_locked = 1;
 
-	if (!is_valid_gup_args(pages, NULL, locked, &gup_flags,
+	if (!is_valid_gup_args(pages, locked, &gup_flags,
 			       FOLL_TOUCH | FOLL_REMOTE))
 		return -EINVAL;
 
-	return __get_user_pages_locked(mm, start, nr_pages, pages, NULL,
+	return __get_user_pages_locked(mm, start, nr_pages, pages,
 				       locked ? locked : &local_locked,
 				       gup_flags);
 }
 
@@ -2266,11 +2245,11 @@ long get_user_pages(unsigned long start, unsigned long nr_pages,
 {
 	int locked = 1;
 
-	if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, FOLL_TOUCH))
+	if (!is_valid_gup_args(pages, NULL, &gup_flags, FOLL_TOUCH))
 		return -EINVAL;
 
 	return __get_user_pages_locked(current->mm, start, nr_pages, pages,
-				       NULL, &locked, gup_flags);
+				       &locked, gup_flags);
 }
 EXPORT_SYMBOL(get_user_pages);
 
@@ -2294,12 +2273,12 @@ long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
 {
 	int locked = 0;
 
-	if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags,
+	if (!is_valid_gup_args(pages, NULL, &gup_flags,
 			       FOLL_TOUCH | FOLL_UNLOCKABLE))
 		return -EINVAL;
 
 	return __get_user_pages_locked(current->mm, start, nr_pages, pages,
-				       NULL, &locked, gup_flags);
+				       &locked, gup_flags);
 }
 EXPORT_SYMBOL(get_user_pages_unlocked);
 
@@ -2982,7 +2961,7 @@ static int internal_get_user_pages_fast(unsigned long start,
 		start += nr_pinned << PAGE_SHIFT;
 		pages += nr_pinned;
 		ret = __gup_longterm_locked(current->mm, start, nr_pages - nr_pinned,
-					    pages, NULL, &locked,
+					    pages, &locked,
 					    gup_flags | FOLL_TOUCH | FOLL_UNLOCKABLE);
 		if (ret < 0) {
 			/*
@@ -3024,7 +3003,7 @@ int get_user_pages_fast_only(unsigned long start, int nr_pages,
 	 * FOLL_FAST_ONLY is required in order to match the API description of
 	 * this routine: no fall back to regular ("slow") GUP.
 	 */
-	if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags,
+	if (!is_valid_gup_args(pages, NULL, &gup_flags,
 			       FOLL_GET | FOLL_FAST_ONLY))
 		return -EINVAL;
 
@@ -3057,7 +3036,7 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
 	 * FOLL_GET, because gup fast is always a "pin with a +1 page refcount"
 	 * request.
 	 */
-	if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, FOLL_GET))
+	if (!is_valid_gup_args(pages, NULL, &gup_flags, FOLL_GET))
 		return -EINVAL;
 	return internal_get_user_pages_fast(start, nr_pages, gup_flags, pages);
 }
@@ -3082,7 +3061,7 @@ EXPORT_SYMBOL_GPL(get_user_pages_fast);
 int pin_user_pages_fast(unsigned long start, int nr_pages,
 			unsigned int gup_flags, struct page **pages)
 {
-	if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, FOLL_PIN))
+	if (!is_valid_gup_args(pages, NULL, &gup_flags, FOLL_PIN))
 		return -EINVAL;
 	return internal_get_user_pages_fast(start, nr_pages, gup_flags, pages);
 }
@@ -3115,10 +3094,10 @@ long pin_user_pages_remote(struct mm_struct *mm,
 {
 	int local_locked = 1;
 
-	if (!is_valid_gup_args(pages, NULL, locked, &gup_flags,
+	if (!is_valid_gup_args(pages, locked, &gup_flags,
 			       FOLL_PIN | FOLL_TOUCH | FOLL_REMOTE))
 		return 0;
 
-	return __gup_longterm_locked(mm, start, nr_pages, pages, NULL,
+	return __gup_longterm_locked(mm, start, nr_pages, pages,
 				     locked ? locked : &local_locked,
 				     gup_flags);
 }
 
@@ -3144,10 +3123,10 @@ long pin_user_pages(unsigned long start, unsigned long nr_pages,
 {
 	int locked = 1;
 
-	if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, FOLL_PIN))
+	if (!is_valid_gup_args(pages, NULL, &gup_flags, FOLL_PIN))
 		return 0;
 	return __gup_longterm_locked(current->mm, start, nr_pages,
-				     pages, NULL, &locked, gup_flags);
+				     pages, &locked, gup_flags);
 }
 EXPORT_SYMBOL(pin_user_pages);
 
@@ -3161,11 +3140,11 @@ long pin_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
 {
 	int locked = 0;
 
-	if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags,
+	if (!is_valid_gup_args(pages, NULL, &gup_flags,
 			       FOLL_PIN | FOLL_TOUCH | FOLL_UNLOCKABLE))
 		return 0;
 
-	return __gup_longterm_locked(current->mm, start, nr_pages, pages, NULL,
+	return __gup_longterm_locked(current->mm, start, nr_pages, pages,
 				     &locked, gup_flags);
 }
 EXPORT_SYMBOL(pin_user_pages_unlocked);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a08fb47fb200..85138a0394b9 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6371,17 +6371,14 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 }
 #endif /* CONFIG_USERFAULTFD */
 
-static void record_subpages_vmas(struct page *page, struct vm_area_struct *vma,
-				 int refs, struct page **pages,
-				 struct vm_area_struct **vmas)
+static void record_subpages(struct page *page, struct vm_area_struct *vma,
+			    int refs, struct page **pages)
 {
 	int nr;
 
 	for (nr = 0; nr < refs; nr++) {
 		if (likely(pages))
 			pages[nr] = nth_page(page, nr);
-		if (vmas)
-			vmas[nr] = vma;
 	}
 }
 
@@ -6454,9 +6451,9 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
 }
 
 long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
-			 struct page **pages, struct vm_area_struct **vmas,
-			 unsigned long *position, unsigned long *nr_pages,
-			 long i, unsigned int flags, int *locked)
+			 struct page **pages, unsigned long *position,
+			 unsigned long *nr_pages, long i, unsigned int flags,
+			 int *locked)
 {
 	unsigned long pfn_offset;
 	unsigned long vaddr = *position;
@@ -6584,7 +6581,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		 * If subpage information not requested, update counters
 		 * and skip the same_page loop below.
 		 */
-		if (!pages && !vmas && !pfn_offset &&
+		if (!pages && !pfn_offset &&
 		    (vaddr + huge_page_size(h) < vma->vm_end) &&
 		    (remainder >= pages_per_huge_page(h))) {
 			vaddr += huge_page_size(h);
@@ -6599,11 +6596,10 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		refs = min3(pages_per_huge_page(h) - pfn_offset, remainder,
 			    (vma->vm_end - ALIGN_DOWN(vaddr, PAGE_SIZE)) >> PAGE_SHIFT);
 
-		if (pages || vmas)
-			record_subpages_vmas(nth_page(page, pfn_offset),
-					     vma, refs,
-					     likely(pages) ? pages + i : NULL,
-					     vmas ? vmas + i : NULL);
+		if (pages)
+			record_subpages(nth_page(page, pfn_offset),
+					vma, refs,
+					likely(pages) ? pages + i : NULL);
 
 		if (pages) {
 			/*