From patchwork Fri Apr 14 23:27:13 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 83577
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Matthew Wilcox, David Hildenbrand, x86@kernel.org, linux-sgx@vger.kernel.org,
    amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, kvm@vger.kernel.org,
    Thomas Gleixner, Ingo Molnar, Borislav Petkov, Jarkko Sakkinen, H. Peter Anvin,
    Xinhui Pan, David Airlie, Daniel Vetter, Dimitri Sivanich, Arnd Bergmann,
    Greg Kroah-Hartman, Paolo Bonzini, Lorenzo Stoakes
Subject: [PATCH 1/7] mm/gup: remove unused vmas parameter from get_user_pages()
Date: Sat, 15 Apr 2023 00:27:13 +0100
X-Mailer: git-send-email 2.40.0

No invocation of get_user_pages() uses the vmas parameter, so remove it.

The GUP API is confusing and caveated. Recent changes have done much to
improve that; however, there is more we can do. Exporting vmas is a prime
target, as the caller has to be extremely careful not to use the returned VMA
pointers after the mmap_lock has been released, or otherwise be left with
dangling pointers.

Removing the vmas parameter focuses the GUP functions on their primary
purpose: pinning (and outputting) pages and performing the actions implied by
the input flags.

This is part of a patch series aiming to remove the vmas parameter
altogether.

Signed-off-by: Lorenzo Stoakes
Suggested-by: Matthew Wilcox (Oracle)
Acked-by: Greg Kroah-Hartman
Reviewed-by: Jason Gunthorpe
---
 arch/x86/kernel/cpu/sgx/ioctl.c     | 2 +-
 drivers/gpu/drm/radeon/radeon_ttm.c | 2 +-
 drivers/misc/sgi-gru/grufault.c     | 2 +-
 include/linux/mm.h                  | 3 +--
 mm/gup.c                            | 9 +++------
 mm/gup_test.c                       | 5 ++---
 virt/kvm/kvm_main.c                 | 4 ++--
 7 files changed, 11 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 21ca0a831b70..5d390df21440 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -214,7 +214,7 @@ static int __sgx_encl_add_page(struct sgx_encl *encl,
     if (!(vma->vm_flags & VM_MAYEXEC))
         return -EACCES;
 
-    ret = get_user_pages(src, 1, 0, &src_page, NULL);
+    ret = get_user_pages(src, 1, 0, &src_page);
     if (ret < 1)
         return -EFAULT;
 
diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
index 1e8e287e113c..0597540f0dde 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -362,7 +362,7 @@ static int radeon_ttm_tt_pin_userptr(struct ttm_device *bdev, struct ttm_tt *ttm
         struct page **pages = ttm->pages + pinned;
 
         r = get_user_pages(userptr, num_pages, write ? FOLL_WRITE : 0,
-                   pages, NULL);
+                   pages);
         if (r < 0)
             goto release_pages;
 
diff --git a/drivers/misc/sgi-gru/grufault.c b/drivers/misc/sgi-gru/grufault.c
index b836936e9747..378cf02a2aa1 100644
--- a/drivers/misc/sgi-gru/grufault.c
+++ b/drivers/misc/sgi-gru/grufault.c
@@ -185,7 +185,7 @@ static int non_atomic_pte_lookup(struct vm_area_struct *vma,
 #else
     *pageshift = PAGE_SHIFT;
 #endif
-    if (get_user_pages(vaddr, 1, write ? FOLL_WRITE : 0, &page, NULL) <= 0)
+    if (get_user_pages(vaddr, 1, write ? FOLL_WRITE : 0, &page) <= 0)
         return -EFAULT;
     *paddr = page_to_phys(page);
     put_page(page);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 5d5ba1556ae9..faeed36c2d04 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2380,8 +2380,7 @@ long pin_user_pages_remote(struct mm_struct *mm,
                unsigned int gup_flags, struct page **pages,
                struct vm_area_struct **vmas, int *locked);
 long get_user_pages(unsigned long start, unsigned long nr_pages,
-            unsigned int gup_flags, struct page **pages,
-            struct vm_area_struct **vmas);
+            unsigned int gup_flags, struct page **pages);
 long pin_user_pages(unsigned long start, unsigned long nr_pages,
             unsigned int gup_flags, struct page **pages,
             struct vm_area_struct **vmas);
diff --git a/mm/gup.c b/mm/gup.c
index 1f72a717232b..7e454d6b157e 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2251,8 +2251,6 @@ long get_user_pages_remote(struct mm_struct *mm,
  * @pages:      array that receives pointers to the pages pinned.
  *              Should be at least nr_pages long. Or NULL, if caller
  *              only intends to ensure the pages are faulted in.
- * @vmas:       array of pointers to vmas corresponding to each page.
- *              Or NULL if the caller does not require them.
  *
  * This is the same as get_user_pages_remote(), just with a less-flexible
  * calling convention where we assume that the mm being operated on belongs to
@@ -2260,16 +2258,15 @@ long get_user_pages_remote(struct mm_struct *mm,
  * obviously don't pass FOLL_REMOTE in here.
  */
 long get_user_pages(unsigned long start, unsigned long nr_pages,
-            unsigned int gup_flags, struct page **pages,
-            struct vm_area_struct **vmas)
+            unsigned int gup_flags, struct page **pages)
 {
     int locked = 1;
 
-    if (!is_valid_gup_args(pages, vmas, NULL, &gup_flags, FOLL_TOUCH))
+    if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, FOLL_TOUCH))
         return -EINVAL;
 
     return __get_user_pages_locked(current->mm, start, nr_pages, pages,
-                       vmas, &locked, gup_flags);
+                       NULL, &locked, gup_flags);
 }
 EXPORT_SYMBOL(get_user_pages);
 
diff --git a/mm/gup_test.c b/mm/gup_test.c
index 8ae7307a1bb6..9ba8ea23f84e 100644
--- a/mm/gup_test.c
+++ b/mm/gup_test.c
@@ -139,8 +139,7 @@ static int __gup_test_ioctl(unsigned int cmd,
                         pages + i);
             break;
         case GUP_BASIC_TEST:
-            nr = get_user_pages(addr, nr, gup->gup_flags, pages + i,
-                        NULL);
+            nr = get_user_pages(addr, nr, gup->gup_flags, pages + i);
             break;
         case PIN_FAST_BENCHMARK:
             nr = pin_user_pages_fast(addr, nr, gup->gup_flags,
@@ -161,7 +160,7 @@ static int __gup_test_ioctl(unsigned int cmd,
                             pages + i, NULL);
             else
                 nr = get_user_pages(addr, nr, gup->gup_flags,
-                            pages + i, NULL);
+                            pages + i);
             break;
         default:
             ret = -EINVAL;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d255964ec331..2d2446df0900 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2474,7 +2474,7 @@ static inline int check_user_page_hwpoison(unsigned long addr)
 {
     int rc, flags = FOLL_HWPOISON | FOLL_WRITE;
 
-    rc = get_user_pages(addr, 1, flags, NULL, NULL);
+    rc = get_user_pages(addr, 1, flags, NULL);
     return rc == -EHWPOISON;
 }
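
A minimal sketch of a call site written against the post-patch signature, for
reference only; it is not taken from the patch, the helper name
example_pin_one_page is invented, and it assumes the usual get_user_pages()
rules (operates on current->mm, mmap_lock held, page later released with
put_page()):

    #include <linux/mm.h>
    #include <linux/sched.h>

    /* Illustrative only: pin a single writable user page of current->mm. */
    static int example_pin_one_page(unsigned long uaddr, struct page **page)
    {
        long ret;

        mmap_read_lock(current->mm);
        /* Post-patch signature: no trailing vmas argument. */
        ret = get_user_pages(uaddr, 1, FOLL_WRITE, page);
        mmap_read_unlock(current->mm);

        /* The caller must later drop the reference with put_page(*page). */
        return ret == 1 ? 0 : -EFAULT;
    }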

From patchwork Fri Apr 14 23:27:23 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 83582
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton, iommu@lists.linux.dev
Cc: Matthew Wilcox, David Hildenbrand, kvm@vger.kernel.org, Jason Gunthorpe, Kevin Tian,
    Joerg Roedel, Will Deacon, Robin Murphy, Alex Williamson, Lorenzo Stoakes
Subject: [PATCH 2/7] mm/gup: remove unused vmas parameter from pin_user_pages_remote()
Date: Sat, 15 Apr 2023 00:27:23 +0100
X-Mailer: git-send-email 2.40.0

No invocation of pin_user_pages_remote() uses the vmas parameter, so remove
it.

This forms part of a larger patch set eliminating the use of the vmas
parameter altogether.

Signed-off-by: Lorenzo Stoakes
Reviewed-by: Jason Gunthorpe
---
 drivers/iommu/iommufd/pages.c   | 4 ++--
 drivers/vfio/vfio_iommu_type1.c | 2 +-
 include/linux/mm.h              | 2 +-
 mm/gup.c                        | 8 +++-----
 mm/process_vm_access.c          | 2 +-
 5 files changed, 8 insertions(+), 10 deletions(-)

diff --git a/drivers/iommu/iommufd/pages.c b/drivers/iommu/iommufd/pages.c
index f8d92c9bb65b..9d55a2188a64 100644
--- a/drivers/iommu/iommufd/pages.c
+++ b/drivers/iommu/iommufd/pages.c
@@ -786,7 +786,7 @@ static int pfn_reader_user_pin(struct pfn_reader_user *user,
             user->locked = 1;
         }
         rc = pin_user_pages_remote(pages->source_mm, uptr, npages,
-                       user->gup_flags, user->upages, NULL,
+                       user->gup_flags, user->upages,
                        &user->locked);
     }
     if (rc <= 0) {
@@ -1787,7 +1787,7 @@ static int iopt_pages_rw_page(struct iopt_pages *pages, unsigned long index,
     rc = pin_user_pages_remote(
         pages->source_mm, (uintptr_t)(pages->uptr + index * PAGE_SIZE),
         1, (flags & IOMMUFD_ACCESS_RW_WRITE) ? FOLL_WRITE : 0, &page,
-        NULL, NULL);
+        NULL);
     mmap_read_unlock(pages->source_mm);
     if (rc != 1) {
         if (WARN_ON(rc >= 0))
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 493c31de0edb..e6dc8fec3ed5 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -562,7 +562,7 @@ static int vaddr_get_pfns(struct mm_struct *mm, unsigned long vaddr,
     mmap_read_lock(mm);
     ret = pin_user_pages_remote(mm, vaddr, npages, flags | FOLL_LONGTERM,
-                    pages, NULL, NULL);
+                    pages, NULL);
     if (ret > 0) {
         int i;
 
diff --git a/include/linux/mm.h b/include/linux/mm.h
index faeed36c2d04..513d5fab02f1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2378,7 +2378,7 @@ long get_user_pages_remote(struct mm_struct *mm,
 long pin_user_pages_remote(struct mm_struct *mm,
                unsigned long start, unsigned long nr_pages,
                unsigned int gup_flags, struct page **pages,
-               struct vm_area_struct **vmas, int *locked);
+               int *locked);
 long get_user_pages(unsigned long start, unsigned long nr_pages,
             unsigned int gup_flags, struct page **pages);
 long pin_user_pages(unsigned long start, unsigned long nr_pages,
diff --git a/mm/gup.c b/mm/gup.c
index 7e454d6b157e..931c805bc32b 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -3093,8 +3093,6 @@ EXPORT_SYMBOL_GPL(pin_user_pages_fast);
  * @gup_flags:  flags modifying lookup behaviour
  * @pages:      array that receives pointers to the pages pinned.
  *              Should be at least nr_pages long.
- * @vmas:       array of pointers to vmas corresponding to each page.
- *              Or NULL if the caller does not require them.
  * @locked:     pointer to lock flag indicating whether lock is held and
  *              subsequently whether VM_FAULT_RETRY functionality can be
  *              utilised. Lock must initially be held.
@@ -3109,14 +3107,14 @@ EXPORT_SYMBOL_GPL(pin_user_pages_fast);
 long pin_user_pages_remote(struct mm_struct *mm,
                unsigned long start, unsigned long nr_pages,
                unsigned int gup_flags, struct page **pages,
-               struct vm_area_struct **vmas, int *locked)
+               int *locked)
 {
     int local_locked = 1;
 
-    if (!is_valid_gup_args(pages, vmas, locked, &gup_flags,
+    if (!is_valid_gup_args(pages, NULL, locked, &gup_flags,
                    FOLL_PIN | FOLL_TOUCH | FOLL_REMOTE))
         return 0;
 
-    return __gup_longterm_locked(mm, start, nr_pages, pages, vmas,
+    return __gup_longterm_locked(mm, start, nr_pages, pages, NULL,
                      locked ? locked : &local_locked,
                      gup_flags);
 }
diff --git a/mm/process_vm_access.c b/mm/process_vm_access.c
index 78dfaf9e8990..0523edab03a6 100644
--- a/mm/process_vm_access.c
+++ b/mm/process_vm_access.c
@@ -104,7 +104,7 @@ static int process_vm_rw_single_vec(unsigned long addr,
         mmap_read_lock(mm);
         pinned_pages = pin_user_pages_remote(mm, pa, pinned_pages,
                              flags, process_pages,
-                             NULL, &locked);
+                             &locked);
         if (locked)
             mmap_read_unlock(mm);
         if (pinned_pages <= 0)
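
For reference, a minimal sketch of a remote pin under the post-patch
signature, mirroring the vfio and process_vm call sites above; the helper
name example_pin_remote is invented and the snippet assumes the pinned page
is later released with unpin_user_page():

    #include <linux/mm.h>

    /* Illustrative only: pin one page of another process's mm. */
    static long example_pin_remote(struct mm_struct *mm, unsigned long uaddr,
                                   struct page **page)
    {
        int locked = 1;
        long ret;

        mmap_read_lock(mm);
        ret = pin_user_pages_remote(mm, uaddr, 1,
                                    FOLL_WRITE | FOLL_LONGTERM,
                                    page, &locked);
        /* GUP may drop the lock; only unlock if it is still held. */
        if (locked)
            mmap_read_unlock(mm);

        return ret;
    }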

From patchwork Fri Apr 14 23:27:31 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 83578
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Matthew Wilcox, David Hildenbrand, linux-arm-kernel@lists.infradead.org,
    kvm@vger.kernel.org, linux-s390@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-perf-users@vger.kernel.org, linux-security-module@vger.kernel.org,
    Catalin Marinas, Will Deacon, Christian Borntraeger, Janosch Frank,
    Claudio Imbrenda, Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
    Sven Schnelle, Eric Biederman, Kees Cook, Alexander Viro, Christian Brauner,
    Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland,
    Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter,
    Kentaro Takeda, Tetsuo Handa, Paul Moore, James Morris, Serge E. Hallyn,
    Paolo Bonzini, Lorenzo Stoakes
Subject: [PATCH 3/7] mm/gup: remove vmas parameter from get_user_pages_remote()
Date: Sat, 15 Apr 2023 00:27:31 +0100
Message-Id: <5a4cf1ebf1c6cdfabbf2f5209facb0180dd20006.1681508038.git.lstoakes@gmail.com>
X-Mailer: git-send-email 2.40.0

The only invocations of get_user_pages_remote() which used the vmas parameter
were for a single page, and those callers can instead look up the VMA
directly. In particular:

- __update_ref_ctr() looked up the VMA but did nothing with it, so we simply
  remove the lookup.

- __access_remote_vm() was already using vma_lookup() when the original
  lookup failed, so performing the lookup directly also de-duplicates the
  code.

This forms part of a broader set of patches intended to eliminate the vmas
parameter altogether.
Signed-off-by: Lorenzo Stoakes
---
 arch/arm64/kernel/mte.c   |  5 +++--
 arch/s390/kvm/interrupt.c |  2 +-
 fs/exec.c                 |  2 +-
 include/linux/mm.h        |  2 +-
 kernel/events/uprobes.c   | 10 +++++-----
 mm/gup.c                  | 12 ++++--------
 mm/memory.c               |  9 +++++----
 mm/rmap.c                 |  2 +-
 security/tomoyo/domain.c  |  2 +-
 virt/kvm/async_pf.c       |  3 +--
 10 files changed, 23 insertions(+), 26 deletions(-)

diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index f5bcb0dc6267..74d8d4007dec 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -437,8 +437,9 @@ static int __access_remote_tags(struct mm_struct *mm, unsigned long addr,
         struct page *page = NULL;
 
         ret = get_user_pages_remote(mm, addr, 1, gup_flags, &page,
-                        &vma, NULL);
-        if (ret <= 0)
+                        NULL);
+        vma = vma_lookup(mm, addr);
+        if (ret <= 0 || !vma)
             break;
 
         /*
diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
index 9250fde1f97d..c19d0cb7d2f2 100644
--- a/arch/s390/kvm/interrupt.c
+++ b/arch/s390/kvm/interrupt.c
@@ -2777,7 +2777,7 @@ static struct page *get_map_page(struct kvm *kvm, u64 uaddr)
 
     mmap_read_lock(kvm->mm);
     get_user_pages_remote(kvm->mm, uaddr, 1, FOLL_WRITE,
-                  &page, NULL, NULL);
+                  &page, NULL);
     mmap_read_unlock(kvm->mm);
     return page;
 }
diff --git a/fs/exec.c b/fs/exec.c
index 87cf3a2f0e9a..d8d48ee15aac 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -219,7 +219,7 @@ static struct page *get_arg_page(struct linux_binprm *bprm, unsigned long pos,
      */
     mmap_read_lock(bprm->mm);
     ret = get_user_pages_remote(bprm->mm, pos, 1, gup_flags,
-            &page, NULL, NULL);
+            &page, NULL);
     mmap_read_unlock(bprm->mm);
     if (ret <= 0)
         return NULL;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 513d5fab02f1..8dfa236cfb58 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2374,7 +2374,7 @@ extern int __access_remote_vm(struct mm_struct *mm, unsigned long addr,
 long get_user_pages_remote(struct mm_struct *mm,
                unsigned long start, unsigned long nr_pages,
                unsigned int gup_flags, struct page **pages,
-               struct vm_area_struct **vmas, int *locked);
+               int *locked);
 long pin_user_pages_remote(struct mm_struct *mm,
                unsigned long start, unsigned long nr_pages,
                unsigned int gup_flags, struct page **pages,
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 59887c69d54c..35e8a7ec884c 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -365,7 +365,6 @@ __update_ref_ctr(struct mm_struct *mm, unsigned long vaddr, short d)
 {
     void *kaddr;
     struct page *page;
-    struct vm_area_struct *vma;
     int ret;
     short *ptr;
 
@@ -373,7 +372,7 @@ __update_ref_ctr(struct mm_struct *mm, unsigned long vaddr, short d)
         return -EINVAL;
 
     ret = get_user_pages_remote(mm, vaddr, 1,
-            FOLL_WRITE, &page, &vma, NULL);
+            FOLL_WRITE, &page, NULL);
     if (unlikely(ret <= 0)) {
         /*
          * We are asking for 1 page. If get_user_pages_remote() fails,
@@ -475,8 +474,9 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
         gup_flags |= FOLL_SPLIT_PMD;
     /* Read the page with vaddr into memory */
     ret = get_user_pages_remote(mm, vaddr, 1, gup_flags,
-                    &old_page, &vma, NULL);
-    if (ret <= 0)
+                    &old_page, NULL);
+    vma = vma_lookup(mm, vaddr);
+    if (ret <= 0 || !vma)
         return ret;
 
     ret = verify_opcode(old_page, vaddr, &opcode);
@@ -2028,7 +2028,7 @@ static int is_trap_at_addr(struct mm_struct *mm, unsigned long vaddr)
      * essentially a kernel access to the memory.
      */
     result = get_user_pages_remote(mm, vaddr, 1, FOLL_FORCE, &page,
-                       NULL, NULL);
+                       NULL);
     if (result < 0)
         return result;
 
diff --git a/mm/gup.c b/mm/gup.c
index 931c805bc32b..9440aa54c741 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2165,8 +2165,6 @@ static bool is_valid_gup_args(struct page **pages, struct vm_area_struct **vmas,
  * @pages:      array that receives pointers to the pages pinned.
  *              Should be at least nr_pages long. Or NULL, if caller
  *              only intends to ensure the pages are faulted in.
- * @vmas:       array of pointers to vmas corresponding to each page.
- *              Or NULL if the caller does not require them.
  * @locked:     pointer to lock flag indicating whether lock is held and
  *              subsequently whether VM_FAULT_RETRY functionality can be
  *              utilised. Lock must initially be held.
@@ -2181,8 +2179,6 @@ static bool is_valid_gup_args(struct page **pages, struct vm_area_struct **vmas,
  *
  * The caller is responsible for releasing returned @pages, via put_page().
  *
- * @vmas are valid only as long as mmap_lock is held.
- *
  * Must be called with mmap_lock held for read or write.
  *
  * get_user_pages_remote walks a process's page tables and takes a reference
@@ -2219,15 +2215,15 @@ static bool is_valid_gup_args(struct page **pages, struct vm_area_struct **vmas,
 long get_user_pages_remote(struct mm_struct *mm,
                unsigned long start, unsigned long nr_pages,
                unsigned int gup_flags, struct page **pages,
-               struct vm_area_struct **vmas, int *locked)
+               int *locked)
 {
     int local_locked = 1;
 
-    if (!is_valid_gup_args(pages, vmas, locked, &gup_flags,
+    if (!is_valid_gup_args(pages, NULL, locked, &gup_flags,
                    FOLL_TOUCH | FOLL_REMOTE))
         return -EINVAL;
 
-    return __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
+    return __get_user_pages_locked(mm, start, nr_pages, pages, NULL,
                        locked ? locked : &local_locked,
                        gup_flags);
 }
@@ -2237,7 +2233,7 @@ EXPORT_SYMBOL(get_user_pages_remote);
 long get_user_pages_remote(struct mm_struct *mm,
                unsigned long start, unsigned long nr_pages,
                unsigned int gup_flags, struct page **pages,
-               struct vm_area_struct **vmas, int *locked)
+               int *locked)
 {
     return 0;
 }
diff --git a/mm/memory.c b/mm/memory.c
index ea8fdca35df3..43426147f9f7 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5596,7 +5596,11 @@ int __access_remote_vm(struct mm_struct *mm, unsigned long addr, void *buf,
         struct page *page = NULL;
 
         ret = get_user_pages_remote(mm, addr, 1,
-                gup_flags, &page, &vma, NULL);
+                gup_flags, &page, NULL);
+        vma = vma_lookup(mm, addr);
+        if (!vma)
+            break;
+
         if (ret <= 0) {
 #ifndef CONFIG_HAVE_IOREMAP_PROT
             break;
@@ -5605,9 +5609,6 @@ int __access_remote_vm(struct mm_struct *mm, unsigned long addr, void *buf,
              * Check if this is a VM_IO | VM_PFNMAP VMA, which
              * we can access using slightly different code.
              */
-            vma = vma_lookup(mm, addr);
-            if (!vma)
-                break;
             if (vma->vm_ops && vma->vm_ops->access)
                 ret = vma->vm_ops->access(vma, addr, buf,
                               len, write);
diff --git a/mm/rmap.c b/mm/rmap.c
index ba901c416785..756ea8a9bb90 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2324,7 +2324,7 @@ int make_device_exclusive_range(struct mm_struct *mm, unsigned long start,
 
     npages = get_user_pages_remote(mm, start, npages,
                        FOLL_GET | FOLL_WRITE | FOLL_SPLIT_PMD,
-                       pages, NULL, NULL);
+                       pages, NULL);
     if (npages < 0)
         return npages;
 
diff --git a/security/tomoyo/domain.c b/security/tomoyo/domain.c
index 31af29f669d2..ac20c0bdff9d 100644
--- a/security/tomoyo/domain.c
+++ b/security/tomoyo/domain.c
@@ -916,7 +916,7 @@ bool tomoyo_dump_page(struct linux_binprm *bprm, unsigned long pos,
      */
     mmap_read_lock(bprm->mm);
     ret = get_user_pages_remote(bprm->mm, pos, 1,
-                    FOLL_FORCE, &page, NULL, NULL);
+                    FOLL_FORCE, &page, NULL);
     mmap_read_unlock(bprm->mm);
     if (ret <= 0)
         return false;
diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
index 9bfe1d6f6529..e033c79d528e 100644
--- a/virt/kvm/async_pf.c
+++ b/virt/kvm/async_pf.c
@@ -61,8 +61,7 @@ static void async_pf_execute(struct work_struct *work)
      * access remotely.
      */
     mmap_read_lock(mm);
-    get_user_pages_remote(mm, addr, 1, FOLL_WRITE, NULL, NULL,
-                  &locked);
+    get_user_pages_remote(mm, addr, 1, FOLL_WRITE, NULL, &locked);
     if (locked)
         mmap_read_unlock(mm);
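
The replacement pattern applied above, for callers which still need the VMA
for a single page, can be summarised by the following sketch; it is
illustrative only (the helper name example_page_and_vma is invented) and
assumes the page and VMA are used only while mmap_lock remains held:

    #include <linux/mm.h>

    /* Illustrative only: pin one page, then look its VMA up directly. */
    static int example_page_and_vma(struct mm_struct *mm, unsigned long addr,
                                    struct page **page)
    {
        struct vm_area_struct *vma;
        long ret;

        mmap_read_lock(mm);
        ret = get_user_pages_remote(mm, addr, 1, FOLL_WRITE, page, NULL);
        vma = vma_lookup(mm, addr);
        if (ret <= 0 || !vma) {
            if (ret > 0)
                put_page(*page);
            mmap_read_unlock(mm);
            return -EFAULT;
        }

        /* ... use *page and vma here, under mmap_lock ... */

        put_page(*page);
        mmap_read_unlock(mm);
        return 0;
    }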

From patchwork Fri Apr 14 23:27:40 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 83581
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Matthew Wilcox, David Hildenbrand, Lorenzo Stoakes
Subject: [PATCH 4/7] mm/gup: introduce the FOLL_SAME_FILE GUP flag
Date: Sat, 15 Apr 2023 00:27:40 +0100
Message-Id: <7ed66bd5243f7535030e0fa6a8a94b76dc5033f1.1681508038.git.lstoakes@gmail.com>
X-Mailer: git-send-email 2.40.0

This flag causes GUP to assert that all VMAs within the input range share the
same vma->vm_file. If they do not, the operation fails.

This is part of a patch series which eliminates the vmas parameter from the
GUP API; it implements, as a GUP flag, the one remaining assertion in the
entire kernel that requires access to the VMAs associated with a GUP range.

Signed-off-by: Lorenzo Stoakes
---
 include/linux/mm_types.h |  2 ++
 mm/gup.c                 | 16 ++++++++++++----
 2 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 3fc9e680f174..84d1aec9dbab 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1185,6 +1185,8 @@ enum {
     FOLL_PCI_P2PDMA = 1 << 10,
     /* allow interrupts from generic signals */
     FOLL_INTERRUPTIBLE = 1 << 11,
+    /* assert that the range spans VMAs with the same vma->vm_file */
+    FOLL_SAME_FILE = 1 << 12,
 
     /* See also internal only FOLL flags in mm/internal.h */
 };
diff --git a/mm/gup.c b/mm/gup.c
index 9440aa54c741..3954ce499a4a 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -959,7 +959,8 @@ static int faultin_page(struct vm_area_struct *vma,
     return 0;
 }
 
-static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
+static int check_vma_flags(struct vm_area_struct *vma, struct file *file,
+               unsigned long gup_flags)
 {
     vm_flags_t vm_flags = vma->vm_flags;
     int write = (gup_flags & FOLL_WRITE);
@@ -968,7 +969,7 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
     if (vm_flags & (VM_IO | VM_PFNMAP))
         return -EFAULT;
 
-    if (gup_flags & FOLL_ANON && !vma_is_anonymous(vma))
+    if ((gup_flags & FOLL_ANON) && !vma_is_anonymous(vma))
         return -EFAULT;
 
     if ((gup_flags & FOLL_LONGTERM) && vma_is_fsdax(vma))
@@ -977,6 +978,9 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
     if (vma_is_secretmem(vma))
         return -EFAULT;
 
+    if ((gup_flags & FOLL_SAME_FILE) && vma->vm_file != file)
+        return -EFAULT;
+
     if (write) {
         if (!(vm_flags & VM_WRITE)) {
             if (!(gup_flags & FOLL_FORCE))
@@ -1081,6 +1085,7 @@ static long __get_user_pages(struct mm_struct *mm,
     long ret = 0, i = 0;
     struct vm_area_struct *vma = NULL;
     struct follow_page_context ctx = { NULL };
+    struct file *file = NULL;
 
     if (!nr_pages)
         return 0;
@@ -1111,10 +1116,13 @@ static long __get_user_pages(struct mm_struct *mm,
                 ret = -EFAULT;
                 goto out;
             }
-            ret = check_vma_flags(vma, gup_flags);
+            ret = check_vma_flags(vma, i == 0 ? vma->vm_file : file,
+                          gup_flags);
             if (ret)
                 goto out;
 
+            file = vma->vm_file;
+
             if (is_vm_hugetlb_page(vma)) {
                 i = follow_hugetlb_page(mm, vma, pages, vmas,
                             &start, &nr_pages, i,
@@ -1595,7 +1603,7 @@ long faultin_vma_page_range(struct vm_area_struct *vma, unsigned long start,
      * We want to report -EINVAL instead of -EFAULT for any permission
      * problems or incompatible mappings.
      */
-    if (check_vma_flags(vma, gup_flags))
+    if (check_vma_flags(vma, vma->vm_file, gup_flags))
         return -EINVAL;
 
     ret = __get_user_pages(mm, start, nr_pages, gup_flags,
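
As a rough illustration only (not code from the patch), the property that
FOLL_SAME_FILE as proposed here asserts can be written as a standalone walk
over the VMAs covering a range; the helper name example_range_has_same_file
is invented and the caller is assumed to hold mmap_lock:

    #include <linux/mm.h>

    /* Illustrative only: fail if the range spans VMAs with differing vm_file. */
    static int example_range_has_same_file(struct mm_struct *mm,
                                           unsigned long start,
                                           unsigned long end)
    {
        VMA_ITERATOR(vmi, mm, start);
        struct vm_area_struct *vma;
        struct file *file = NULL;
        bool first = true;

        mmap_assert_locked(mm);

        for_each_vma_range(vmi, vma, end) {
            if (first) {
                file = vma->vm_file;
                first = false;
            } else if (vma->vm_file != file) {
                return -EFAULT; /* mixed backing files: GUP would fail */
            }
        }

        return 0;
    }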

From patchwork Fri Apr 14 23:27:45 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 83579
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Matthew Wilcox, David Hildenbrand, Jens Axboe, Pavel Begunkov,
    io-uring@vger.kernel.org, Lorenzo Stoakes
Subject: [PATCH 5/7] io_uring: rsrc: use FOLL_SAME_FILE on pin_user_pages()
Date: Sat, 15 Apr 2023 00:27:45 +0100
Message-Id: <17357dec04b32593b71e4fdf3c30a346020acf98.1681508038.git.lstoakes@gmail.com>
X-Mailer: git-send-email 2.40.0

Commit edd478269640 ("io_uring/rsrc: disallow multi-source reg buffers")
prevents io_pin_pages() from pinning pages spanning multiple VMAs with
permitted characteristics (anon/huge), requiring that all VMAs share the same
vm_file.

The newly introduced FOLL_SAME_FILE flag permits this to be expressed as a
GUP flag rather than having to retrieve the VMAs to perform the check.

We then only need to look up the first VMA to assert the anon/hugepage
requirement, as we know the remaining VMAs possess the same characteristics.

Doing this eliminates the one instance of vmas being used by
pin_user_pages().
Signed-off-by: Lorenzo Stoakes
Suggested-by: Matthew Wilcox (Oracle)
---
 io_uring/rsrc.c | 39 ++++++++++++++++-----------------------
 1 file changed, 16 insertions(+), 23 deletions(-)

diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index 7a43aed8e395..adc860bcbd4f 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -1141,9 +1141,8 @@ static int io_buffer_account_pin(struct io_ring_ctx *ctx, struct page **pages,
 struct page **io_pin_pages(unsigned long ubuf, unsigned long len, int *npages)
 {
     unsigned long start, end, nr_pages;
-    struct vm_area_struct **vmas = NULL;
     struct page **pages = NULL;
-    int i, pret, ret = -ENOMEM;
+    int pret, ret = -ENOMEM;
 
     end = (ubuf + len + PAGE_SIZE - 1) >> PAGE_SHIFT;
     start = ubuf >> PAGE_SHIFT;
@@ -1153,31 +1152,26 @@ struct page **io_pin_pages(unsigned long ubuf, unsigned long len, int *npages)
     if (!pages)
         goto done;
 
-    vmas = kvmalloc_array(nr_pages, sizeof(struct vm_area_struct *),
-                  GFP_KERNEL);
-    if (!vmas)
-        goto done;
-
     ret = 0;
     mmap_read_lock(current->mm);
-    pret = pin_user_pages(ubuf, nr_pages, FOLL_WRITE | FOLL_LONGTERM,
-                  pages, vmas);
+
+    pret = pin_user_pages(ubuf, nr_pages,
+                  FOLL_WRITE | FOLL_LONGTERM | FOLL_SAME_FILE,
+                  pages, NULL);
     if (pret == nr_pages) {
-        struct file *file = vmas[0]->vm_file;
+        /*
+         * lookup the first VMA, we require that all VMAs in range
+         * maintain the same file characteristics, as enforced by
+         * FOLL_SAME_FILE
+         */
+        struct vm_area_struct *vma = vma_lookup(current->mm, ubuf);
+        struct file *file;
 
         /* don't support file backed memory */
-        for (i = 0; i < nr_pages; i++) {
-            if (vmas[i]->vm_file != file) {
-                ret = -EINVAL;
-                break;
-            }
-            if (!file)
-                continue;
-            if (!vma_is_shmem(vmas[i]) && !is_file_hugepages(file)) {
-                ret = -EOPNOTSUPP;
-                break;
-            }
-        }
+        file = vma->vm_file;
+        if (file && !vma_is_shmem(vma) && !is_file_hugepages(file))
+            ret = -EOPNOTSUPP;
+
         *npages = nr_pages;
     } else {
         ret = pret < 0 ? pret : -EFAULT;
@@ -1194,7 +1188,6 @@ struct page **io_pin_pages(unsigned long ubuf, unsigned long len, int *npages)
     }
     ret = 0;
 done:
-    kvfree(vmas);
     if (ret < 0) {
         kvfree(pages);
         pages = ERR_PTR(ret);
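
The shape of the resulting pin, shown here only as an illustrative sketch
with the invented helper name example_pin_longterm_same_file (at this point
in the series pin_user_pages() still takes a vmas argument, so NULL is
passed):

    #include <linux/mm.h>
    #include <linux/sched.h>

    /* Illustrative only: long-term pin requiring a single backing file. */
    static long example_pin_longterm_same_file(unsigned long ubuf,
                                               unsigned long nr_pages,
                                               struct page **pages)
    {
        long pret;

        mmap_read_lock(current->mm);
        pret = pin_user_pages(ubuf, nr_pages,
                              FOLL_WRITE | FOLL_LONGTERM | FOLL_SAME_FILE,
                              pages, NULL);
        mmap_read_unlock(current->mm);

        return pret;
    }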
From patchwork Fri Apr 14 23:27:49 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 83583
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Matthew Wilcox, David Hildenbrand, Michael Ellerman, Nicholas Piggin, Christophe Leroy, Dennis Dalessandro, Jason Gunthorpe, Leon Romanovsky, Christian Benvenuti, Nelson Escobar, Bernard Metzler, Mauro Carvalho Chehab, "Michael S . Tsirkin", Jason Wang, Jens Axboe, Pavel Begunkov, Bjorn Topel, Magnus Karlsson, Maciej Fijalkowski, Jonathan Lemon, "David S . Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, linuxppc-dev@lists.ozlabs.org, linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, virtualization@lists.linux-foundation.org, kvm@vger.kernel.org, netdev@vger.kernel.org, io-uring@vger.kernel.org, bpf@vger.kernel.org, Lorenzo Stoakes
Subject: [PATCH 6/7] mm/gup: remove vmas parameter from pin_user_pages()
Date: Sat, 15 Apr 2023 00:27:49 +0100
Message-Id: <8333df218248b7bb08d5fe391c1b8e9e3f309588.1681508038.git.lstoakes@gmail.com>

After the introduction of FOLL_SAME_FILE we no longer require vmas for any invocation of pin_user_pages(), so eliminate this parameter from the function and all callers. This clears the way to removing the vmas parameter from GUP altogether.
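
Concretely, the change at each call site is purely mechanical: the trailing vmas argument, which every caller in the diff below already passes as NULL, is simply dropped. A minimal before/after sketch with a hypothetical caller (not taken from any of the drivers touched here; the two variants obviously cannot coexist against one tree and are shown side by side only for comparison):

#include <linux/mm.h>

/* Before this patch: pin_user_pages() took an optional vmas array. */
static long pin_buffer_before(unsigned long uaddr, unsigned long npages,
			      struct page **pages)
{
	/* Caller is expected to hold mmap_read_lock(current->mm). */
	return pin_user_pages(uaddr, npages, FOLL_WRITE | FOLL_LONGTERM,
			      pages, NULL /* vmas */);
}

/* After this patch: the vmas parameter is gone from the prototype entirely. */
static long pin_buffer_after(unsigned long uaddr, unsigned long npages,
			     struct page **pages)
{
	/* Caller is expected to hold mmap_read_lock(current->mm). */
	return pin_user_pages(uaddr, npages, FOLL_WRITE | FOLL_LONGTERM,
			      pages);
}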
Signed-off-by: Lorenzo Stoakes --- arch/powerpc/mm/book3s64/iommu_api.c | 2 +- drivers/infiniband/hw/qib/qib_user_pages.c | 2 +- drivers/infiniband/hw/usnic/usnic_uiom.c | 2 +- drivers/infiniband/sw/siw/siw_mem.c | 2 +- drivers/media/v4l2-core/videobuf-dma-sg.c | 2 +- drivers/vdpa/vdpa_user/vduse_dev.c | 2 +- drivers/vhost/vdpa.c | 2 +- include/linux/mm.h | 3 +-- io_uring/rsrc.c | 2 +- mm/gup.c | 9 +++------ mm/gup_test.c | 9 ++++----- net/xdp/xdp_umem.c | 2 +- 12 files changed, 17 insertions(+), 22 deletions(-) diff --git a/arch/powerpc/mm/book3s64/iommu_api.c b/arch/powerpc/mm/book3s64/iommu_api.c index 81d7185e2ae8..d19fb1f3007d 100644 --- a/arch/powerpc/mm/book3s64/iommu_api.c +++ b/arch/powerpc/mm/book3s64/iommu_api.c @@ -105,7 +105,7 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua, ret = pin_user_pages(ua + (entry << PAGE_SHIFT), n, FOLL_WRITE | FOLL_LONGTERM, - mem->hpages + entry, NULL); + mem->hpages + entry); if (ret == n) { pinned += n; continue; diff --git a/drivers/infiniband/hw/qib/qib_user_pages.c b/drivers/infiniband/hw/qib/qib_user_pages.c index f693bc753b6b..1bb7507325bc 100644 --- a/drivers/infiniband/hw/qib/qib_user_pages.c +++ b/drivers/infiniband/hw/qib/qib_user_pages.c @@ -111,7 +111,7 @@ int qib_get_user_pages(unsigned long start_page, size_t num_pages, ret = pin_user_pages(start_page + got * PAGE_SIZE, num_pages - got, FOLL_LONGTERM | FOLL_WRITE, - p + got, NULL); + p + got); if (ret < 0) { mmap_read_unlock(current->mm); goto bail_release; diff --git a/drivers/infiniband/hw/usnic/usnic_uiom.c b/drivers/infiniband/hw/usnic/usnic_uiom.c index 2a5cac2658ec..84e0f41e7dfa 100644 --- a/drivers/infiniband/hw/usnic/usnic_uiom.c +++ b/drivers/infiniband/hw/usnic/usnic_uiom.c @@ -140,7 +140,7 @@ static int usnic_uiom_get_pages(unsigned long addr, size_t size, int writable, ret = pin_user_pages(cur_base, min_t(unsigned long, npages, PAGE_SIZE / sizeof(struct page *)), - gup_flags, page_list, NULL); + gup_flags, page_list); if (ret < 0) goto out; diff --git a/drivers/infiniband/sw/siw/siw_mem.c b/drivers/infiniband/sw/siw/siw_mem.c index f51ab2ccf151..e6e25f15567d 100644 --- a/drivers/infiniband/sw/siw/siw_mem.c +++ b/drivers/infiniband/sw/siw/siw_mem.c @@ -422,7 +422,7 @@ struct siw_umem *siw_umem_get(u64 start, u64 len, bool writable) umem->page_chunk[i].plist = plist; while (nents) { rv = pin_user_pages(first_page_va, nents, foll_flags, - plist, NULL); + plist); if (rv < 0) goto out_sem_up; diff --git a/drivers/media/v4l2-core/videobuf-dma-sg.c b/drivers/media/v4l2-core/videobuf-dma-sg.c index 53001532e8e3..405b89ea1054 100644 --- a/drivers/media/v4l2-core/videobuf-dma-sg.c +++ b/drivers/media/v4l2-core/videobuf-dma-sg.c @@ -180,7 +180,7 @@ static int videobuf_dma_init_user_locked(struct videobuf_dmabuf *dma, data, size, dma->nr_pages); err = pin_user_pages(data & PAGE_MASK, dma->nr_pages, gup_flags, - dma->pages, NULL); + dma->pages); if (err != dma->nr_pages) { dma->nr_pages = (err >= 0) ? err : 0; diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c index 0c3b48616a9f..1f80254604f0 100644 --- a/drivers/vdpa/vdpa_user/vduse_dev.c +++ b/drivers/vdpa/vdpa_user/vduse_dev.c @@ -995,7 +995,7 @@ static int vduse_dev_reg_umem(struct vduse_dev *dev, goto out; pinned = pin_user_pages(uaddr, npages, FOLL_LONGTERM | FOLL_WRITE, - page_list, NULL); + page_list); if (pinned != npages) { ret = pinned < 0 ? 
pinned : -ENOMEM; goto out; diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c index 7be9d9d8f01c..4317128c1c62 100644 --- a/drivers/vhost/vdpa.c +++ b/drivers/vhost/vdpa.c @@ -952,7 +952,7 @@ static int vhost_vdpa_pa_map(struct vhost_vdpa *v, while (npages) { sz2pin = min_t(unsigned long, npages, list_size); pinned = pin_user_pages(cur_base, sz2pin, - gup_flags, page_list, NULL); + gup_flags, page_list); if (sz2pin != pinned) { if (pinned < 0) { ret = pinned; diff --git a/include/linux/mm.h b/include/linux/mm.h index 8dfa236cfb58..3f7d36ad7de7 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2382,8 +2382,7 @@ long pin_user_pages_remote(struct mm_struct *mm, long get_user_pages(unsigned long start, unsigned long nr_pages, unsigned int gup_flags, struct page **pages); long pin_user_pages(unsigned long start, unsigned long nr_pages, - unsigned int gup_flags, struct page **pages, - struct vm_area_struct **vmas); + unsigned int gup_flags, struct page **pages); long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages, struct page **pages, unsigned int gup_flags); long pin_user_pages_unlocked(unsigned long start, unsigned long nr_pages, diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c index adc860bcbd4f..92d0d47e322c 100644 --- a/io_uring/rsrc.c +++ b/io_uring/rsrc.c @@ -1157,7 +1157,7 @@ struct page **io_pin_pages(unsigned long ubuf, unsigned long len, int *npages) pret = pin_user_pages(ubuf, nr_pages, FOLL_WRITE | FOLL_LONGTERM | FOLL_SAME_FILE, - pages, NULL); + pages); if (pret == nr_pages) { /* * lookup the first VMA, we require that all VMAs in range diff --git a/mm/gup.c b/mm/gup.c index 3954ce499a4a..714970ef3b30 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -3132,8 +3132,6 @@ EXPORT_SYMBOL(pin_user_pages_remote); * @gup_flags: flags modifying lookup behaviour * @pages: array that receives pointers to the pages pinned. * Should be at least nr_pages long. - * @vmas: array of pointers to vmas corresponding to each page. - * Or NULL if the caller does not require them. * * Nearly the same as get_user_pages(), except that FOLL_TOUCH is not set, and * FOLL_PIN is set. @@ -3142,15 +3140,14 @@ EXPORT_SYMBOL(pin_user_pages_remote); * see Documentation/core-api/pin_user_pages.rst for details. 
*/ long pin_user_pages(unsigned long start, unsigned long nr_pages, - unsigned int gup_flags, struct page **pages, - struct vm_area_struct **vmas) + unsigned int gup_flags, struct page **pages) { int locked = 1; - if (!is_valid_gup_args(pages, vmas, NULL, &gup_flags, FOLL_PIN)) + if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, FOLL_PIN)) return 0; return __gup_longterm_locked(current->mm, start, nr_pages, - pages, vmas, &locked, gup_flags); + pages, NULL, &locked, gup_flags); } EXPORT_SYMBOL(pin_user_pages); diff --git a/mm/gup_test.c b/mm/gup_test.c index 9ba8ea23f84e..1668ce0e0783 100644 --- a/mm/gup_test.c +++ b/mm/gup_test.c @@ -146,18 +146,17 @@ static int __gup_test_ioctl(unsigned int cmd, pages + i); break; case PIN_BASIC_TEST: - nr = pin_user_pages(addr, nr, gup->gup_flags, pages + i, - NULL); + nr = pin_user_pages(addr, nr, gup->gup_flags, pages + i); break; case PIN_LONGTERM_BENCHMARK: nr = pin_user_pages(addr, nr, gup->gup_flags | FOLL_LONGTERM, - pages + i, NULL); + pages + i); break; case DUMP_USER_PAGES_TEST: if (gup->test_flags & GUP_TEST_FLAG_DUMP_PAGES_USE_PIN) nr = pin_user_pages(addr, nr, gup->gup_flags, - pages + i, NULL); + pages + i); else nr = get_user_pages(addr, nr, gup->gup_flags, pages + i); @@ -270,7 +269,7 @@ static inline int pin_longterm_test_start(unsigned long arg) gup_flags, pages); else cur_pages = pin_user_pages(addr, remaining_pages, - gup_flags, pages, NULL); + gup_flags, pages); if (cur_pages < 0) { pin_longterm_test_stop(); ret = cur_pages; diff --git a/net/xdp/xdp_umem.c b/net/xdp/xdp_umem.c index 02207e852d79..06cead2b8e34 100644 --- a/net/xdp/xdp_umem.c +++ b/net/xdp/xdp_umem.c @@ -103,7 +103,7 @@ static int xdp_umem_pin_pages(struct xdp_umem *umem, unsigned long address) mmap_read_lock(current->mm); npgs = pin_user_pages(address, umem->npgs, - gup_flags | FOLL_LONGTERM, &umem->pgs[0], NULL); + gup_flags | FOLL_LONGTERM, &umem->pgs[0]); mmap_read_unlock(current->mm); if (npgs != umem->npgs) {
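
With the conversion above applied, the whole life cycle of a long-term pin against the new four-argument pin_user_pages() looks roughly like the following sketch. The helper and its error policy are hypothetical, invented purely for illustration; the pattern itself mirrors what callers such as io_pin_pages() and xdp_umem_pin_pages() in the diff above do: pin under mmap_read_lock(), then release the pages with unpin_user_pages() once they are no longer needed:

#include <linux/err.h>
#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/slab.h>

/* Hypothetical helper: pin a user range, use it, then release it. */
static int pin_and_use_user_range(unsigned long uaddr, unsigned long nr_pages)
{
	struct page **pages;
	long pinned;
	int ret = 0;

	pages = kvmalloc_array(nr_pages, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return -ENOMEM;

	mmap_read_lock(current->mm);
	pinned = pin_user_pages(uaddr, nr_pages,
				FOLL_WRITE | FOLL_LONGTERM, pages);
	mmap_read_unlock(current->mm);

	if (pinned < 0) {
		ret = pinned;
		goto out_free;
	}
	if (pinned != nr_pages) {
		ret = -EFAULT;
		goto out_unpin;
	}

	/* ... hand pages[] to DMA, copy to/from them, etc. ... */

out_unpin:
	unpin_user_pages(pages, pinned);
out_free:
	kvfree(pages);
	return ret;
}

Note that no vmas array is allocated or inspected anywhere: with the vmas parameter gone, there is nothing left that could outlive the mmap_lock.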
From patchwork Fri Apr 14 23:27:52 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 83580
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Matthew Wilcox, David Hildenbrand, Mike Kravetz, Muchun Song, Lorenzo Stoakes
Subject: [PATCH 7/7] mm/gup: remove vmas array from internal GUP functions
Date: Sat, 15 Apr 2023 00:27:52 +0100
Message-Id: <37b67ed80a952690ece6e1a9103c5702cce55ae2.1681508038.git.lstoakes@gmail.com>

Now that we have eliminated all callers of GUP APIs which use the vmas parameter, eliminate it altogether. This removes a class of bugs where returned vmas might be kept around longer than the mmap_lock is held, so we need no longer be concerned about the lock being dropped during the operation and leaving dangling pointers behind. It also simplifies the GUP API and makes its purpose considerably clearer: follow flags are applied and, if pinning, an array of pages is returned.

Signed-off-by: Lorenzo Stoakes --- include/linux/hugetlb.h | 10 ++--- mm/gup.c | 83 +++++++++++++++-------------------------- mm/hugetlb.c | 24 +++++------- 3 files changed, 45 insertions(+), 72 deletions(-) diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index 28703fe22386..2735e7a2b998 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -141,9 +141,8 @@ int copy_hugetlb_page_range(struct mm_struct *, struct mm_struct *, struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma, unsigned long address, unsigned int flags); long follow_hugetlb_page(struct mm_struct *, struct vm_area_struct *, - struct page **, struct vm_area_struct **, - unsigned long *, unsigned long *, long, unsigned int, - int *); + struct page **, unsigned long *, unsigned long *, + long, unsigned int, int *); void unmap_hugepage_range(struct vm_area_struct *, unsigned long, unsigned long, struct page *, zap_flags_t); @@ -297,9 +296,8 @@ static inline struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma, static inline long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma, struct page **pages, - struct vm_area_struct **vmas, unsigned long *position, - unsigned long *nr_pages, long i, unsigned int flags, - int *nonblocking) + unsigned long *position, unsigned long *nr_pages, + long i, unsigned int flags, int *nonblocking) { BUG(); return 0; diff --git a/mm/gup.c b/mm/gup.c index 714970ef3b30..385e428a4acb 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -1028,8 +1028,6 @@ static int check_vma_flags(struct vm_area_struct *vma, struct file *file, * @pages: array that receives pointers to the pages pinned. * Should be at least nr_pages long. Or NULL, if caller * only intends to ensure the pages are faulted in.
- * @vmas: array of pointers to vmas corresponding to each page. - * Or NULL if the caller does not require them. * @locked: whether we're still with the mmap_lock held * * Returns either number of pages pinned (which may be less than the @@ -1043,8 +1041,6 @@ static int check_vma_flags(struct vm_area_struct *vma, struct file *file, * * The caller is responsible for releasing returned @pages, via put_page(). * - * @vmas are valid only as long as mmap_lock is held. - * * Must be called with mmap_lock held. It may be released. See below. * * __get_user_pages walks a process's page tables and takes a reference to @@ -1080,7 +1076,7 @@ static int check_vma_flags(struct vm_area_struct *vma, struct file *file, static long __get_user_pages(struct mm_struct *mm, unsigned long start, unsigned long nr_pages, unsigned int gup_flags, struct page **pages, - struct vm_area_struct **vmas, int *locked) + int *locked) { long ret = 0, i = 0; struct vm_area_struct *vma = NULL; @@ -1124,9 +1120,9 @@ static long __get_user_pages(struct mm_struct *mm, file = vma->vm_file; if (is_vm_hugetlb_page(vma)) { - i = follow_hugetlb_page(mm, vma, pages, vmas, - &start, &nr_pages, i, - gup_flags, locked); + i = follow_hugetlb_page(mm, vma, pages, + &start, &nr_pages, i, + gup_flags, locked); if (!*locked) { /* * We've got a VM_FAULT_RETRY @@ -1191,10 +1187,6 @@ static long __get_user_pages(struct mm_struct *mm, ctx.page_mask = 0; } next_page: - if (vmas) { - vmas[i] = vma; - ctx.page_mask = 0; - } page_increm = 1 + (~(start >> PAGE_SHIFT) & ctx.page_mask); if (page_increm > nr_pages) page_increm = nr_pages; @@ -1349,7 +1341,6 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm, unsigned long start, unsigned long nr_pages, struct page **pages, - struct vm_area_struct **vmas, int *locked, unsigned int flags) { @@ -1387,7 +1378,7 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm, pages_done = 0; for (;;) { ret = __get_user_pages(mm, start, nr_pages, flags, pages, - vmas, locked); + locked); if (!(flags & FOLL_UNLOCKABLE)) { /* VM_FAULT_RETRY couldn't trigger, bypass */ pages_done = ret; @@ -1451,7 +1442,7 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm, *locked = 1; ret = __get_user_pages(mm, start, 1, flags | FOLL_TRIED, - pages, NULL, locked); + pages, locked); if (!*locked) { /* Continue to retry until we succeeded */ BUG_ON(ret != 0); @@ -1549,7 +1540,7 @@ long populate_vma_page_range(struct vm_area_struct *vma, * not result in a stack expansion that recurses back here. */ ret = __get_user_pages(mm, start, nr_pages, gup_flags, - NULL, NULL, locked ? locked : &local_locked); + NULL, locked ? 
locked : &local_locked); lru_add_drain(); return ret; } @@ -1607,7 +1598,7 @@ long faultin_vma_page_range(struct vm_area_struct *vma, unsigned long start, return -EINVAL; ret = __get_user_pages(mm, start, nr_pages, gup_flags, - NULL, NULL, locked); + NULL, locked); lru_add_drain(); return ret; } @@ -1675,8 +1666,7 @@ int __mm_populate(unsigned long start, unsigned long len, int ignore_errors) #else /* CONFIG_MMU */ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start, unsigned long nr_pages, struct page **pages, - struct vm_area_struct **vmas, int *locked, - unsigned int foll_flags) + int *locked, unsigned int foll_flags) { struct vm_area_struct *vma; bool must_unlock = false; @@ -1720,8 +1710,7 @@ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start, if (pages[i]) get_page(pages[i]); } - if (vmas) - vmas[i] = vma; + start = (start + PAGE_SIZE) & PAGE_MASK; } @@ -1902,8 +1891,7 @@ struct page *get_dump_page(unsigned long addr) int locked = 0; int ret; - ret = __get_user_pages_locked(current->mm, addr, 1, &page, NULL, - &locked, + ret = __get_user_pages_locked(current->mm, addr, 1, &page, &locked, FOLL_FORCE | FOLL_DUMP | FOLL_GET); return (ret == 1) ? page : NULL; } @@ -2076,7 +2064,6 @@ static long __gup_longterm_locked(struct mm_struct *mm, unsigned long start, unsigned long nr_pages, struct page **pages, - struct vm_area_struct **vmas, int *locked, unsigned int gup_flags) { @@ -2084,13 +2071,13 @@ static long __gup_longterm_locked(struct mm_struct *mm, long rc, nr_pinned_pages; if (!(gup_flags & FOLL_LONGTERM)) - return __get_user_pages_locked(mm, start, nr_pages, pages, vmas, + return __get_user_pages_locked(mm, start, nr_pages, pages, locked, gup_flags); flags = memalloc_pin_save(); do { nr_pinned_pages = __get_user_pages_locked(mm, start, nr_pages, - pages, vmas, locked, + pages, locked, gup_flags); if (nr_pinned_pages <= 0) { rc = nr_pinned_pages; @@ -2108,9 +2095,8 @@ static long __gup_longterm_locked(struct mm_struct *mm, * Check that the given flags are valid for the exported gup/pup interface, and * update them with the required flags that the caller must have set. */ -static bool is_valid_gup_args(struct page **pages, struct vm_area_struct **vmas, - int *locked, unsigned int *gup_flags_p, - unsigned int to_set) +static bool is_valid_gup_args(struct page **pages, int *locked, + unsigned int *gup_flags_p, unsigned int to_set) { unsigned int gup_flags = *gup_flags_p; @@ -2152,13 +2138,6 @@ static bool is_valid_gup_args(struct page **pages, struct vm_area_struct **vmas, (gup_flags & FOLL_PCI_P2PDMA))) return false; - /* - * Can't use VMAs with locked, as locked allows GUP to unlock - * which invalidates the vmas array - */ - if (WARN_ON_ONCE(vmas && (gup_flags & FOLL_UNLOCKABLE))) - return false; - *gup_flags_p = gup_flags; return true; } @@ -2227,11 +2206,11 @@ long get_user_pages_remote(struct mm_struct *mm, { int local_locked = 1; - if (!is_valid_gup_args(pages, NULL, locked, &gup_flags, + if (!is_valid_gup_args(pages, locked, &gup_flags, FOLL_TOUCH | FOLL_REMOTE)) return -EINVAL; - return __get_user_pages_locked(mm, start, nr_pages, pages, NULL, + return __get_user_pages_locked(mm, start, nr_pages, pages, locked ? 
locked : &local_locked, gup_flags); } @@ -2266,11 +2245,11 @@ long get_user_pages(unsigned long start, unsigned long nr_pages, { int locked = 1; - if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, FOLL_TOUCH)) + if (!is_valid_gup_args(pages, NULL, &gup_flags, FOLL_TOUCH)) return -EINVAL; return __get_user_pages_locked(current->mm, start, nr_pages, pages, - NULL, &locked, gup_flags); + &locked, gup_flags); } EXPORT_SYMBOL(get_user_pages); @@ -2294,12 +2273,12 @@ long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages, { int locked = 0; - if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, + if (!is_valid_gup_args(pages, NULL, &gup_flags, FOLL_TOUCH | FOLL_UNLOCKABLE)) return -EINVAL; return __get_user_pages_locked(current->mm, start, nr_pages, pages, - NULL, &locked, gup_flags); + &locked, gup_flags); } EXPORT_SYMBOL(get_user_pages_unlocked); @@ -2982,7 +2961,7 @@ static int internal_get_user_pages_fast(unsigned long start, start += nr_pinned << PAGE_SHIFT; pages += nr_pinned; ret = __gup_longterm_locked(current->mm, start, nr_pages - nr_pinned, - pages, NULL, &locked, + pages, &locked, gup_flags | FOLL_TOUCH | FOLL_UNLOCKABLE); if (ret < 0) { /* @@ -3024,7 +3003,7 @@ int get_user_pages_fast_only(unsigned long start, int nr_pages, * FOLL_FAST_ONLY is required in order to match the API description of * this routine: no fall back to regular ("slow") GUP. */ - if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, + if (!is_valid_gup_args(pages, NULL, &gup_flags, FOLL_GET | FOLL_FAST_ONLY)) return -EINVAL; @@ -3057,7 +3036,7 @@ int get_user_pages_fast(unsigned long start, int nr_pages, * FOLL_GET, because gup fast is always a "pin with a +1 page refcount" * request. */ - if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, FOLL_GET)) + if (!is_valid_gup_args(pages, NULL, &gup_flags, FOLL_GET)) return -EINVAL; return internal_get_user_pages_fast(start, nr_pages, gup_flags, pages); } @@ -3082,7 +3061,7 @@ EXPORT_SYMBOL_GPL(get_user_pages_fast); int pin_user_pages_fast(unsigned long start, int nr_pages, unsigned int gup_flags, struct page **pages) { - if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, FOLL_PIN)) + if (!is_valid_gup_args(pages, NULL, &gup_flags, FOLL_PIN)) return -EINVAL; return internal_get_user_pages_fast(start, nr_pages, gup_flags, pages); } @@ -3115,10 +3094,10 @@ long pin_user_pages_remote(struct mm_struct *mm, { int local_locked = 1; - if (!is_valid_gup_args(pages, NULL, locked, &gup_flags, + if (!is_valid_gup_args(pages, locked, &gup_flags, FOLL_PIN | FOLL_TOUCH | FOLL_REMOTE)) return 0; - return __gup_longterm_locked(mm, start, nr_pages, pages, NULL, + return __gup_longterm_locked(mm, start, nr_pages, pages, locked ? 
locked : &local_locked, gup_flags); } @@ -3144,10 +3123,10 @@ long pin_user_pages(unsigned long start, unsigned long nr_pages, { int locked = 1; - if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, FOLL_PIN)) + if (!is_valid_gup_args(pages, NULL, &gup_flags, FOLL_PIN)) return 0; return __gup_longterm_locked(current->mm, start, nr_pages, - pages, NULL, &locked, gup_flags); + pages, &locked, gup_flags); } EXPORT_SYMBOL(pin_user_pages); @@ -3161,11 +3140,11 @@ long pin_user_pages_unlocked(unsigned long start, unsigned long nr_pages, { int locked = 0; - if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, + if (!is_valid_gup_args(pages, NULL, &gup_flags, FOLL_PIN | FOLL_TOUCH | FOLL_UNLOCKABLE)) return 0; - return __gup_longterm_locked(current->mm, start, nr_pages, pages, NULL, + return __gup_longterm_locked(current->mm, start, nr_pages, pages, &locked, gup_flags); } EXPORT_SYMBOL(pin_user_pages_unlocked); diff --git a/mm/hugetlb.c b/mm/hugetlb.c index a08fb47fb200..85138a0394b9 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -6371,17 +6371,14 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte, } #endif /* CONFIG_USERFAULTFD */ -static void record_subpages_vmas(struct page *page, struct vm_area_struct *vma, - int refs, struct page **pages, - struct vm_area_struct **vmas) +static void record_subpages(struct page *page, struct vm_area_struct *vma, + int refs, struct page **pages) { int nr; for (nr = 0; nr < refs; nr++) { if (likely(pages)) pages[nr] = nth_page(page, nr); - if (vmas) - vmas[nr] = vma; } } @@ -6454,9 +6451,9 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma, } long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma, - struct page **pages, struct vm_area_struct **vmas, - unsigned long *position, unsigned long *nr_pages, - long i, unsigned int flags, int *locked) + struct page **pages, unsigned long *position, + unsigned long *nr_pages, long i, unsigned int flags, + int *locked) { unsigned long pfn_offset; unsigned long vaddr = *position; @@ -6584,7 +6581,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma, * If subpage information not requested, update counters * and skip the same_page loop below. */ - if (!pages && !vmas && !pfn_offset && + if (!pages && !pfn_offset && (vaddr + huge_page_size(h) < vma->vm_end) && (remainder >= pages_per_huge_page(h))) { vaddr += huge_page_size(h); @@ -6599,11 +6596,10 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma, refs = min3(pages_per_huge_page(h) - pfn_offset, remainder, (vma->vm_end - ALIGN_DOWN(vaddr, PAGE_SIZE)) >> PAGE_SHIFT); - if (pages || vmas) - record_subpages_vmas(nth_page(page, pfn_offset), - vma, refs, - likely(pages) ? pages + i : NULL, - vmas ? vmas + i : NULL); + if (pages) + record_subpages(nth_page(page, pfn_offset), + vma, refs, + likely(pages) ? pages + i : NULL); if (pages) { /*