From patchwork Wed Jan 3 09:14:17 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 184676
From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: James Houghton, David Hildenbrand, "Kirill A . Shutemov", Yang Shi,
    peterx@redhat.com, linux-riscv@lists.infradead.org, Andrew Morton,
    "Aneesh Kumar K . V", Rik van Riel, Andrea Arcangeli, Axel Rasmussen,
    Mike Rapoport, John Hubbard, Vlastimil Babka, Michael Ellerman,
    Christophe Leroy, Andrew Jones, linuxppc-dev@lists.ozlabs.org,
    Mike Kravetz, Muchun Song, linux-arm-kernel@lists.infradead.org,
    Jason Gunthorpe, Christoph Hellwig, Lorenzo Stoakes, Matthew Wilcox
Subject: [PATCH v2 07/13] mm/gup: Refactor record_subpages() to find 1st small page
Date: Wed, 3 Jan 2024 17:14:17 +0800
Message-ID: <20240103091423.400294-8-peterx@redhat.com>
In-Reply-To: <20240103091423.400294-1-peterx@redhat.com>
References: <20240103091423.400294-1-peterx@redhat.com>

From: Peter Xu

All the fast-gup functions currently take a tail page to operate on, and
always need to do page mask calculations before feeding it into
record_subpages().

Merge that logic into record_subpages(), so that it does the nth_page()
calculation itself.

Signed-off-by: Peter Xu
Reviewed-by: Jason Gunthorpe
---
 mm/gup.c | 25 ++++++++++++++-----------
 1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index fa93e14b7fca..3813aad79c4a 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2767,13 +2767,16 @@ static int __gup_device_huge_pud(pud_t pud, pud_t *pudp, unsigned long addr,
 }
 #endif
 
-static int record_subpages(struct page *page, unsigned long addr,
-			   unsigned long end, struct page **pages)
+static int record_subpages(struct page *page, unsigned long sz,
+			   unsigned long addr, unsigned long end,
+			   struct page **pages)
 {
+	struct page *start_page;
 	int nr;
 
+	start_page = nth_page(page, (addr & (sz - 1)) >> PAGE_SHIFT);
 	for (nr = 0; addr != end; nr++, addr += PAGE_SIZE)
-		pages[nr] = nth_page(page, nr);
+		pages[nr] = nth_page(start_page, nr);
 
 	return nr;
 }
@@ -2808,8 +2811,8 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
 	/* hugepages are never "special" */
 	VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 
-	page = nth_page(pte_page(pte), (addr & (sz - 1)) >> PAGE_SHIFT);
-	refs = record_subpages(page, addr, end, pages + *nr);
+	page = pte_page(pte);
+	refs = record_subpages(page, sz, addr, end, pages + *nr);
 
 	folio = try_grab_folio(page, refs, flags);
 	if (!folio)
@@ -2882,8 +2885,8 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 					     pages, nr);
 	}
 
-	page = nth_page(pmd_page(orig), (addr & ~PMD_MASK) >> PAGE_SHIFT);
-	refs = record_subpages(page, addr, end, pages + *nr);
+	page = pmd_page(orig);
+	refs = record_subpages(page, PMD_SIZE, addr, end, pages + *nr);
 
 	folio = try_grab_folio(page, refs, flags);
 	if (!folio)
@@ -2926,8 +2929,8 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
 					     pages, nr);
 	}
 
-	page = nth_page(pud_page(orig), (addr & ~PUD_MASK) >> PAGE_SHIFT);
-	refs = record_subpages(page, addr, end, pages + *nr);
+	page = pud_page(orig);
+	refs = record_subpages(page, PUD_SIZE, addr, end, pages + *nr);
 
 	folio = try_grab_folio(page, refs, flags);
 	if (!folio)
@@ -2966,8 +2969,8 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
 
 	BUILD_BUG_ON(pgd_devmap(orig));
 
-	page = nth_page(pgd_page(orig), (addr & ~PGDIR_MASK) >> PAGE_SHIFT);
-	refs = record_subpages(page, addr, end, pages + *nr);
+	page = pgd_page(orig);
+	refs = record_subpages(page, PGDIR_SIZE, addr, end, pages + *nr);
 
 	folio = try_grab_folio(page, refs, flags);
 	if (!folio)
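
The refactor centers on one piece of arithmetic: (addr & (sz - 1)) >> PAGE_SHIFT
gives the index of the first small page of the huge mapping that the walk starts
on, so callers can now hand over the head page plus the mapping size instead of
pre-computing a tail page themselves. The following is a minimal,
userspace-compilable sketch of that index math; PAGE_SHIFT, the struct page
stub, the simplified nth_page() and the values in main() are illustrative
assumptions, not the kernel definitions.

/*
 * Illustrative userspace sketch of the index math done by the reworked
 * record_subpages().  PAGE_SHIFT, struct page, nth_page() and the values
 * in main() are simplified stand-ins, not the kernel definitions.
 */
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

struct page { unsigned long pfn; };

/* Simplified nth_page(): step @n small pages forward from @page. */
static struct page *nth_page(struct page *page, unsigned long n)
{
	return page + n;
}

static int record_subpages(struct page *page, unsigned long sz,
			   unsigned long addr, unsigned long end,
			   struct page **pages)
{
	struct page *start_page;
	int nr;

	/* Offset of @addr within the huge mapping, in units of small pages. */
	start_page = nth_page(page, (addr & (sz - 1)) >> PAGE_SHIFT);
	for (nr = 0; addr != end; nr++, addr += PAGE_SIZE)
		pages[nr] = nth_page(start_page, nr);

	return nr;
}

int main(void)
{
	struct page huge[512];		/* pretend 2M huge page: 512 subpages */
	struct page *pages[8];
	unsigned long sz = 1UL << 21;	/* PMD_SIZE on x86-64 */
	unsigned long addr = (100UL << 21) + 5 * PAGE_SIZE;
	unsigned long end = addr + 3 * PAGE_SIZE;
	int i, nr;

	/* Caller passes the head page; the offset math happens inside. */
	nr = record_subpages(huge, sz, addr, end, pages);
	for (i = 0; i < nr; i++)
		printf("pages[%d] -> subpage %ld\n", i, (long)(pages[i] - huge));

	return 0;
}

Passing sz (the hugepte size, PMD_SIZE, PUD_SIZE or PGDIR_SIZE) is what lets
every fast-gup caller drop its own nth_page() pre-computation, as the hunks
above show.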