From patchwork Tue Oct 3 09:15:09 2023
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 147775
Date: Tue, 3 Oct 2023 02:15:09 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
cc: Andi Kleen, Christoph Lameter, Matthew Wilcox, Mike Kravetz,
    David Hildenbrand, Suren Baghdasaryan, Yang Shi, Sidhartha Kumar,
    Vishal Moola, Kefeng Wang, Greg Kroah-Hartman, Tejun Heo, Mel Gorman,
    Michal Hocko, "Huang, Ying", linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH v2 01/12] hugetlbfs: drop shared NUMA mempolicy pretence

hugetlbfs_fallocate() goes through the motions of pasting a shared NUMA
mempolicy onto its pseudo-vma, but how could there ever be a shared NUMA
mempolicy for this file?  hugetlb_vm_ops has never offered a set_policy
method, and hugetlbfs_parse_param() has never supported any mpol options
for a mount-wide default policy.

It's just an illusion: clean it away so as not to confuse others, giving
us more freedom to adjust shmem's set_policy/get_policy implementation.
But hugetlbfs_inode_info is still required, just to accommodate seals.

Yes, shared NUMA mempolicy support could be added to hugetlbfs, with a
set_policy method and/or mpol mount option (Andi's first posting did
include an admitted-unsatisfactory hugetlb_set_policy()); but it seems
that nobody has bothered to add that in the nineteen years since v2.6.7
made it possible, and there is at least one company that has invested
enough into hugetlbfs that I guess they have learnt well enough how to
manage its NUMA without needing shared mempolicy.

Remove linux/mempolicy.h from linux/hugetlb.h: include linux/pagemap.h
in its place, because hugetlb.h's recently added use of
filemap_lock_folio() requires that (although most .configs and .c's get
it in some other way).
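For contrast, the one filesystem that really does implement these hooks is
shmem; a lightly abridged sketch of its wiring (based on mm/shmem.c as it
stands before this series, other vm_ops handlers omitted and comments added
here) shows what hugetlbfs never had:

#ifdef CONFIG_NUMA
static int shmem_set_policy(struct vm_area_struct *vma, struct mempolicy *mpol)
{
	struct inode *inode = file_inode(vma->vm_file);

	/* The shared policy lives on the shmem inode, not on the vma */
	return mpol_set_shared_policy(&SHMEM_I(inode)->policy, vma, mpol);
}

static struct mempolicy *shmem_get_policy(struct vm_area_struct *vma,
					  unsigned long addr)
{
	struct inode *inode = file_inode(vma->vm_file);
	pgoff_t index;

	/* Look the shared policy up by page index within the object */
	index = ((addr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
	return mpol_shared_policy_lookup(&SHMEM_I(inode)->policy, index);
}
#endif

static const struct vm_operations_struct shmem_vm_ops = {
	.fault		= shmem_fault,
	/* ... */
#ifdef CONFIG_NUMA
	.set_policy	= shmem_set_policy,
	.get_policy	= shmem_get_policy,
#endif
};

hugetlb_vm_ops has no such entries, so there was never anything for the
fallocate pseudo-vma to look up.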
Signed-off-by: Hugh Dickins
Reviewed-by: Matthew Wilcox (Oracle)
---
 fs/hugetlbfs/inode.c    | 41 +----------------------------------------
 include/linux/hugetlb.h |  3 +--
 2 files changed, 2 insertions(+), 42 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 926d01c493fb..0586c90cb9a5 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -83,29 +83,6 @@ static const struct fs_parameter_spec hugetlb_fs_parameters[] = {
 	{}
 };
 
-#ifdef CONFIG_NUMA
-static inline void hugetlb_set_vma_policy(struct vm_area_struct *vma,
-					struct inode *inode, pgoff_t index)
-{
-	vma->vm_policy = mpol_shared_policy_lookup(&HUGETLBFS_I(inode)->policy,
-							index);
-}
-
-static inline void hugetlb_drop_vma_policy(struct vm_area_struct *vma)
-{
-	mpol_cond_put(vma->vm_policy);
-}
-#else
-static inline void hugetlb_set_vma_policy(struct vm_area_struct *vma,
-					struct inode *inode, pgoff_t index)
-{
-}
-
-static inline void hugetlb_drop_vma_policy(struct vm_area_struct *vma)
-{
-}
-#endif
-
 /*
  * Mask used when checking the page offset value passed in via system
  * calls. This value will be converted to a loff_t which is signed.
@@ -853,8 +830,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
 
 		/*
 		 * Initialize a pseudo vma as this is required by the huge page
-		 * allocation routines.  If NUMA is configured, use page index
-		 * as input to create an allocation policy.
+		 * allocation routines.
 		 */
 		vma_init(&pseudo_vma, mm);
 		vm_flags_init(&pseudo_vma, VM_HUGETLB | VM_MAYSHARE | VM_SHARED);
@@ -902,9 +878,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
 		 * folios in these areas, we need to consume the reserves
 		 * to keep reservation accounting consistent.
 		 */
-		hugetlb_set_vma_policy(&pseudo_vma, inode, index);
 		folio = alloc_hugetlb_folio(&pseudo_vma, addr, 0);
-		hugetlb_drop_vma_policy(&pseudo_vma);
 		if (IS_ERR(folio)) {
 			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
 			error = PTR_ERR(folio);
@@ -1283,18 +1257,6 @@ static struct inode *hugetlbfs_alloc_inode(struct super_block *sb)
 		hugetlbfs_inc_free_inodes(sbinfo);
 		return NULL;
 	}
-
-	/*
-	 * Any time after allocation, hugetlbfs_destroy_inode can be called
-	 * for the inode.  mpol_free_shared_policy is unconditionally called
-	 * as part of hugetlbfs_destroy_inode.  So, initialize policy here
-	 * in case of a quick call to destroy.
-	 *
-	 * Note that the policy is initialized even if we are creating a
-	 * private inode.  This simplifies hugetlbfs_destroy_inode.
-	 */
-	mpol_shared_policy_init(&p->policy, NULL);
-
 	return &p->vfs_inode;
 }
 
@@ -1306,7 +1268,6 @@ static void hugetlbfs_free_inode(struct inode *inode)
 static void hugetlbfs_destroy_inode(struct inode *inode)
 {
 	hugetlbfs_inc_free_inodes(HUGETLBFS_SB(inode->i_sb));
-	mpol_free_shared_policy(&HUGETLBFS_I(inode)->policy);
 }
 
 static const struct address_space_operations hugetlbfs_aops = {
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 3c4427a2396d..a574e26e18a2 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -30,7 +30,7 @@ void free_huge_folio(struct folio *folio);
 
 #ifdef CONFIG_HUGETLB_PAGE
 
-#include <linux/mempolicy.h>
+#include <linux/pagemap.h>
 #include <linux/shm.h>
 #include <asm/tlbflush.h>
@@ -513,7 +513,6 @@ static inline struct hugetlbfs_sb_info *HUGETLBFS_SB(struct super_block *sb)
 }
 
 struct hugetlbfs_inode_info {
-	struct shared_policy policy;
 	struct inode vfs_inode;
 	unsigned int seals;
 };

From patchwork Tue Oct 3 09:16:29 2023
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 147776
Date: Tue, 3 Oct 2023 02:16:29 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
cc: Andi Kleen, Christoph Lameter, Matthew Wilcox, Mike Kravetz,
    David Hildenbrand, Suren Baghdasaryan, Yang Shi, Sidhartha Kumar,
    Vishal Moola, Kefeng Wang, Greg Kroah-Hartman, Tejun Heo, Mel Gorman,
    Michal Hocko, "Huang, Ying", linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH v2 02/12] kernfs: drop shared NUMA mempolicy hooks
Message-ID: <302164-a760-4a9e-879b-6870c9b4013@google.com>

It seems strange that kernfs should be an outlier with a set_policy and
get_policy in its kernfs_vm_ops.  Ah, it dates back to v2.6.30's commit
095160aee954 ("sysfs: fix some bin_vm_ops errors"), when I had crashed
on powerpc's pci_mmap_legacy_page_range() fallback to shmem_zero_setup().

Well, that was commendably thorough, to give sysfs-bin a set_policy and
get_policy, just to avoid the way it was coded resulting in EINVAL from
mmap when CONFIG_NUMA; but somehow it feels a bit over-the-top to me now.

It's easier to say that nobody should expect to manage a shmem object's
shared NUMA mempolicy via some kernfs backdoor to that object: delete
that code (and there's no longer an EINVAL from mmap in the NUMA case).

This then leaves set_policy/get_policy as implemented only by shmem -
though importantly also by SysV SHM, which has to interface with shmem
which implements them, and with SHM_HUGETLB which does not.
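The SysV SHM case mentioned above is easiest to see from userspace.  A
minimal sketch (not part of this patch; assumes libnuma's <numaif.h>
mbind() wrapper, link with -lnuma; segment size and node number are
arbitrary, error handling omitted):

#include <numaif.h>		/* mbind(), MPOL_BIND */
#include <sys/ipc.h>
#include <sys/shm.h>
#include <stdio.h>

int main(void)
{
	size_t len = 2 * 1024 * 1024;		/* arbitrary example size */
	unsigned long nodes = 1UL << 0;		/* nodemask: node 0 only */
	int id = shmget(IPC_PRIVATE, len, IPC_CREAT | 0600);
	void *addr = shmat(id, NULL, 0);

	/*
	 * For an ordinary (shmem-backed) segment this reaches shmem's
	 * set_policy, so the policy is shared: it sticks to the object
	 * and later attaches see it too.  With SHM_HUGETLB there is no
	 * set_policy underneath, so the policy applies only to this
	 * particular mapping.
	 */
	if (mbind(addr, len, MPOL_BIND, &nodes, 8 * sizeof(nodes), 0))
		perror("mbind");

	shmdt(addr);
	shmctl(id, IPC_RMID, NULL);
	return 0;
}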
Signed-off-by: Hugh Dickins
Reviewed-by: Matthew Wilcox (Oracle)
---
 fs/kernfs/file.c | 49 -------------------------------------------------
 1 file changed, 49 deletions(-)

diff --git a/fs/kernfs/file.c b/fs/kernfs/file.c
index 180906c36f51..aaa76410e550 100644
--- a/fs/kernfs/file.c
+++ b/fs/kernfs/file.c
@@ -429,60 +429,11 @@ static int kernfs_vma_access(struct vm_area_struct *vma, unsigned long addr,
 	return ret;
 }
 
-#ifdef CONFIG_NUMA
-static int kernfs_vma_set_policy(struct vm_area_struct *vma,
-				 struct mempolicy *new)
-{
-	struct file *file = vma->vm_file;
-	struct kernfs_open_file *of = kernfs_of(file);
-	int ret;
-
-	if (!of->vm_ops)
-		return 0;
-
-	if (!kernfs_get_active(of->kn))
-		return -EINVAL;
-
-	ret = 0;
-	if (of->vm_ops->set_policy)
-		ret = of->vm_ops->set_policy(vma, new);
-
-	kernfs_put_active(of->kn);
-	return ret;
-}
-
-static struct mempolicy *kernfs_vma_get_policy(struct vm_area_struct *vma,
-					       unsigned long addr)
-{
-	struct file *file = vma->vm_file;
-	struct kernfs_open_file *of = kernfs_of(file);
-	struct mempolicy *pol;
-
-	if (!of->vm_ops)
-		return vma->vm_policy;
-
-	if (!kernfs_get_active(of->kn))
-		return vma->vm_policy;
-
-	pol = vma->vm_policy;
-	if (of->vm_ops->get_policy)
-		pol = of->vm_ops->get_policy(vma, addr);
-
-	kernfs_put_active(of->kn);
-	return pol;
-}
-
-#endif
-
 static const struct vm_operations_struct kernfs_vm_ops = {
 	.open		= kernfs_vma_open,
 	.fault		= kernfs_vma_fault,
 	.page_mkwrite	= kernfs_vma_page_mkwrite,
 	.access		= kernfs_vma_access,
-#ifdef CONFIG_NUMA
-	.set_policy	= kernfs_vma_set_policy,
-	.get_policy	= kernfs_vma_get_policy,
-#endif
 };
 
 static int kernfs_fop_mmap(struct file *file, struct vm_area_struct *vma)

From patchwork Tue Oct 3 09:17:43 2023
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 147777
Date: Tue, 3 Oct 2023 02:17:43 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
cc: Andi Kleen, Christoph Lameter, Matthew Wilcox, Mike Kravetz,
    David Hildenbrand, Suren Baghdasaryan, Yang Shi, Sidhartha Kumar,
    Vishal Moola, Kefeng Wang, Greg Kroah-Hartman, Tejun Heo, Mel Gorman,
    Michal Hocko, "Huang, Ying", linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH v2 03/12] mempolicy: fix migrate_pages(2) syscall return nr_failed
Message-ID: <9a6b0b9-3bb-dbef-8adf-efab4397b8d@google.com>

"man 2 migrate_pages" says "On success migrate_pages() returns the number
of pages that could not be moved".  Although 5.3 and 5.4 commits fixed
mbind(MPOL_MF_STRICT|MPOL_MF_MOVE*) to fail with EIO when not all pages
could be moved (because some could not be isolated for migration),
migrate_pages(2) was left still reporting only those pages failing at the
migration stage, forgetting those failing at the earlier isolation stage.

Fix that by accumulating a long nr_failed count in struct queue_pages,
returned by queue_pages_range() when it's not returning an error, for
adding on to the nr_failed count from migrate_pages() in mm/migrate.c.
A count of pages?  It's more a count of folios, but changing it to pages
would entail more work (also in mm/migrate.c): does not seem justified.

queue_pages_range() itself should only return -EIO in the "strictly
unmovable" case (STRICT without any MOVEs): in that case it's best to
break out as soon as nr_failed gets set; but otherwise it should continue
to isolate pages for MOVing even when nr_failed is set - as the mbind(2)
manpage promises.

There's a case when nr_failed should be incremented when it was missed:
queue_folios_pte_range() and queue_folios_hugetlb() count the transient
migration entries, like queue_folios_pmd() already did.  And there's a
case when nr_failed should not be incremented when it would have been:
in meeting later PTEs of the same large folio, which can only be isolated
once: fixed by recording the current large folio in struct queue_pages.

Clean up the affected functions, fixing or updating many comments.  Bool
migrate_folio_add(), without -EIO: true if adding, or if skipping shared
(but its arguable folio_estimated_sharers() heuristic left unchanged).
Use MPOL_MF_WRLOCK flag to queue_pages_range(), instead of bool lock_vma.
Use explicit STRICT|MOVE* flags where queue_pages_test_walk() checks for
skipping, instead of hiding them behind MPOL_MF_VALID.
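As a reminder of the syscall semantics being fixed (this is not from the
patch; a sketch using libnuma's <numaif.h> migrate_pages() wrapper, link
with -lnuma; node numbers are arbitrary): a positive return is the count
of pages that could not be moved, which with this fix also covers pages
that could not even be isolated for migration.

#include <numaif.h>		/* migrate_pages() wrapper */
#include <stdio.h>

int main(void)
{
	unsigned long old_nodes = 1UL << 0;	/* move pages found on node 0... */
	unsigned long new_nodes = 1UL << 1;	/* ...to node 1 (example nodes)  */
	long ret;

	/* pid 0 means the calling process; maxnode counts bits in the masks */
	ret = migrate_pages(0, 8 * sizeof(unsigned long), &old_nodes, &new_nodes);
	if (ret < 0)
		perror("migrate_pages");
	else if (ret > 0)
		printf("%ld pages could not be moved\n", ret);	/* not an error */
	else
		printf("all pages moved\n");
	return 0;
}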
Signed-off-by: Hugh Dickins Reviewed-by: Matthew Wilcox (Oracle) Reviewed-by: "Huang, Ying" --- mm/mempolicy.c | 342 ++++++++++++++++++++++++--------------------------- 1 file changed, 161 insertions(+), 181 deletions(-) diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 38a47fa33ef4..752d880dcdf8 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -111,7 +111,8 @@ /* Internal flags */ #define MPOL_MF_DISCONTIG_OK (MPOL_MF_INTERNAL << 0) /* Skip checks for continuous vmas */ -#define MPOL_MF_INVERT (MPOL_MF_INTERNAL << 1) /* Invert check for nodemask */ +#define MPOL_MF_INVERT (MPOL_MF_INTERNAL << 1) /* Invert check for nodemask */ +#define MPOL_MF_WRLOCK (MPOL_MF_INTERNAL << 2) /* Write-lock walked vmas */ static struct kmem_cache *policy_cache; static struct kmem_cache *sn_cache; @@ -416,9 +417,19 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX] = { }, }; -static int migrate_folio_add(struct folio *folio, struct list_head *foliolist, +static bool migrate_folio_add(struct folio *folio, struct list_head *foliolist, unsigned long flags); +static bool strictly_unmovable(unsigned long flags) +{ + /* + * STRICT without MOVE flags lets do_mbind() fail immediately with -EIO + * if any misplaced page is found. + */ + return (flags & (MPOL_MF_STRICT | MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) == + MPOL_MF_STRICT; +} + struct queue_pages { struct list_head *pagelist; unsigned long flags; @@ -426,7 +437,8 @@ struct queue_pages { unsigned long start; unsigned long end; struct vm_area_struct *first; - bool has_unmovable; + struct folio *large; /* note last large folio encountered */ + long nr_failed; /* could not be isolated at this time */ }; /* @@ -444,61 +456,37 @@ static inline bool queue_folio_required(struct folio *folio, return node_isset(nid, *qp->nmask) == !(flags & MPOL_MF_INVERT); } -/* - * queue_folios_pmd() has three possible return values: - * 0 - folios are placed on the right node or queued successfully, or - * special page is met, i.e. zero page, or unmovable page is found - * but continue walking (indicated by queue_pages.has_unmovable). - * -EIO - is migration entry or only MPOL_MF_STRICT was specified and an - * existing folio was already on a node that does not follow the - * policy. - */ -static int queue_folios_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr, - unsigned long end, struct mm_walk *walk) - __releases(ptl) +static void queue_folios_pmd(pmd_t *pmd, struct mm_walk *walk) { - int ret = 0; struct folio *folio; struct queue_pages *qp = walk->private; - unsigned long flags; if (unlikely(is_pmd_migration_entry(*pmd))) { - ret = -EIO; - goto unlock; + qp->nr_failed++; + return; } folio = pfn_folio(pmd_pfn(*pmd)); if (is_huge_zero_page(&folio->page)) { walk->action = ACTION_CONTINUE; - goto unlock; + return; } if (!queue_folio_required(folio, qp)) - goto unlock; - - flags = qp->flags; - /* go to folio migration */ - if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) { - if (!vma_migratable(walk->vma) || - migrate_folio_add(folio, qp->pagelist, flags)) { - qp->has_unmovable = true; - goto unlock; - } - } else - ret = -EIO; -unlock: - spin_unlock(ptl); - return ret; + return; + if (!(qp->flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) || + !vma_migratable(walk->vma) || + !migrate_folio_add(folio, qp->pagelist, qp->flags)) + qp->nr_failed++; } /* - * Scan through pages checking if pages follow certain conditions, - * and move them to the pagelist if they do. 
+ * Scan through folios, checking if they satisfy the required conditions, + * moving them from LRU to local pagelist for migration if they do (or not). * - * queue_folios_pte_range() has three possible return values: - * 0 - folios are placed on the right node or queued successfully, or - * special page is met, i.e. zero page, or unmovable page is found - * but continue walking (indicated by queue_pages.has_unmovable). - * -EIO - only MPOL_MF_STRICT was specified and an existing folio was already - * on a node that does not follow the policy. + * queue_folios_pte_range() has two possible return values: + * 0 - continue walking to scan for more, even if an existing folio on the + * wrong node could not be isolated and queued for migration. + * -EIO - only MPOL_MF_STRICT was specified, without MPOL_MF_MOVE or ..._ALL, + * and an existing folio was on a node that does not follow the policy. */ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end, struct mm_walk *walk) @@ -512,8 +500,11 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr, spinlock_t *ptl; ptl = pmd_trans_huge_lock(pmd, vma); - if (ptl) - return queue_folios_pmd(pmd, ptl, addr, end, walk); + if (ptl) { + queue_folios_pmd(pmd, walk); + spin_unlock(ptl); + goto out; + } mapped_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl); if (!pte) { @@ -522,8 +513,13 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr, } for (; addr != end; pte++, addr += PAGE_SIZE) { ptent = ptep_get(pte); - if (!pte_present(ptent)) + if (pte_none(ptent)) continue; + if (!pte_present(ptent)) { + if (is_migration_entry(pte_to_swp_entry(ptent))) + qp->nr_failed++; + continue; + } folio = vm_normal_folio(vma, addr, ptent); if (!folio || folio_is_zone_device(folio)) continue; @@ -535,95 +531,87 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr, continue; if (!queue_folio_required(folio, qp)) continue; - if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) { + if (folio_test_large(folio)) { /* - * MPOL_MF_STRICT must be specified if we get here. - * Continue walking vmas due to MPOL_MF_MOVE* flags. + * A large folio can only be isolated from LRU once, + * but may be mapped by many PTEs (and Copy-On-Write may + * intersperse PTEs of other, order 0, folios). This is + * a common case, so don't mistake it for failure (but + * there can be other cases of multi-mapped pages which + * this quick check does not help to filter out - and a + * search of the pagelist might grow to be prohibitive). + * + * migrate_pages(&pagelist) returns nr_failed folios, so + * check "large" now so that queue_pages_range() returns + * a comparable nr_failed folios. This does imply that + * if folio could not be isolated for some racy reason + * at its first PTE, later PTEs will not give it another + * chance of isolation; but keeps the accounting simple. */ - if (!vma_migratable(vma)) - qp->has_unmovable = true; - - /* - * Do not abort immediately since there may be - * temporary off LRU pages in the range. Still - * need migrate other LRU pages. - */ - if (migrate_folio_add(folio, qp->pagelist, flags)) - qp->has_unmovable = true; - } else - break; + if (folio == qp->large) + continue; + qp->large = folio; + } + if (!(flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) || + !vma_migratable(vma) || + !migrate_folio_add(folio, qp->pagelist, flags)) { + qp->nr_failed++; + if (strictly_unmovable(flags)) + break; + } } pte_unmap_unlock(mapped_pte, ptl); cond_resched(); - - return addr != end ? 
-EIO : 0; +out: + if (qp->nr_failed && strictly_unmovable(flags)) + return -EIO; + return 0; } static int queue_folios_hugetlb(pte_t *pte, unsigned long hmask, unsigned long addr, unsigned long end, struct mm_walk *walk) { - int ret = 0; #ifdef CONFIG_HUGETLB_PAGE struct queue_pages *qp = walk->private; - unsigned long flags = (qp->flags & MPOL_MF_VALID); + unsigned long flags = qp->flags; struct folio *folio; spinlock_t *ptl; pte_t entry; ptl = huge_pte_lock(hstate_vma(walk->vma), walk->mm, pte); entry = huge_ptep_get(pte); - if (!pte_present(entry)) + if (!pte_present(entry)) { + if (unlikely(is_hugetlb_entry_migration(entry))) + qp->nr_failed++; goto unlock; + } folio = pfn_folio(pte_pfn(entry)); if (!queue_folio_required(folio, qp)) goto unlock; - - if (flags == MPOL_MF_STRICT) { - /* - * STRICT alone means only detecting misplaced folio and no - * need to further check other vma. - */ - ret = -EIO; + if (!(flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) || + !vma_migratable(walk->vma)) { + qp->nr_failed++; goto unlock; } - - if (!vma_migratable(walk->vma)) { - /* - * Must be STRICT with MOVE*, otherwise .test_walk() have - * stopped walking current vma. - * Detecting misplaced folio but allow migrating folios which - * have been queued. - */ - qp->has_unmovable = true; - goto unlock; - } - /* - * With MPOL_MF_MOVE, we try to migrate only unshared folios. If it - * is shared it is likely not worth migrating. + * Unless MPOL_MF_MOVE_ALL, we try to avoid migrating a shared folio. + * Choosing not to migrate a shared folio is not counted as a failure. * * To check if the folio is shared, ideally we want to make sure * every page is mapped to the same process. Doing that is very - * expensive, so check the estimated mapcount of the folio instead. + * expensive, so check the estimated sharers of the folio instead. */ - if (flags & (MPOL_MF_MOVE_ALL) || - (flags & MPOL_MF_MOVE && folio_estimated_sharers(folio) == 1 && - !hugetlb_pmd_shared(pte))) { - if (!isolate_hugetlb(folio, qp->pagelist) && - (flags & MPOL_MF_STRICT)) - /* - * Failed to isolate folio but allow migrating pages - * which have been queued. - */ - qp->has_unmovable = true; - } + if ((flags & MPOL_MF_MOVE_ALL) || + (folio_estimated_sharers(folio) == 1 && !hugetlb_pmd_shared(pte))) + if (!isolate_hugetlb(folio, qp->pagelist)) + qp->nr_failed++; unlock: spin_unlock(ptl); -#else - BUG(); + if (qp->nr_failed && strictly_unmovable(flags)) + return -EIO; #endif - return ret; + return 0; } #ifdef CONFIG_NUMA_BALANCING @@ -704,8 +692,11 @@ static int queue_pages_test_walk(unsigned long start, unsigned long end, return 1; } - /* queue pages from current vma */ - if (flags & MPOL_MF_VALID) + /* + * Check page nodes, and queue pages to move, in the current vma. + * But if no moving, and no strict checking, the scan can be skipped. + */ + if (flags & (MPOL_MF_STRICT | MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) return 0; return 1; } @@ -727,22 +718,21 @@ static const struct mm_walk_ops queue_pages_lock_vma_walk_ops = { /* * Walk through page tables and collect pages to be migrated. * - * If pages found in a given range are on a set of nodes (determined by - * @nodes and @flags,) it's isolated and queued to the pagelist which is - * passed via @private. + * If pages found in a given range are not on the required set of @nodes, + * and migration is allowed, they are isolated and queued to @pagelist. * - * queue_pages_range() has three possible return values: - * 1 - there is unmovable page, but MPOL_MF_MOVE* & MPOL_MF_STRICT were - * specified. 
- * 0 - queue pages successfully or no misplaced page. - * errno - i.e. misplaced pages with MPOL_MF_STRICT specified (-EIO) or - * memory range specified by nodemask and maxnode points outside - * your accessible address space (-EFAULT) + * queue_pages_range() may return: + * 0 - all pages already on the right node, or successfully queued for moving + * (or neither strict checking nor moving requested: only range checking). + * >0 - this number of misplaced folios could not be queued for moving + * (a hugetlbfs page or a transparent huge page being counted as 1). + * -EIO - a misplaced page found, when MPOL_MF_STRICT specified without MOVEs. + * -EFAULT - a hole in the memory range, when MPOL_MF_DISCONTIG_OK unspecified. */ -static int +static long queue_pages_range(struct mm_struct *mm, unsigned long start, unsigned long end, nodemask_t *nodes, unsigned long flags, - struct list_head *pagelist, bool lock_vma) + struct list_head *pagelist) { int err; struct queue_pages qp = { @@ -752,20 +742,17 @@ queue_pages_range(struct mm_struct *mm, unsigned long start, unsigned long end, .start = start, .end = end, .first = NULL, - .has_unmovable = false, }; - const struct mm_walk_ops *ops = lock_vma ? + const struct mm_walk_ops *ops = (flags & MPOL_MF_WRLOCK) ? &queue_pages_lock_vma_walk_ops : &queue_pages_walk_ops; err = walk_page_range(mm, start, end, ops, &qp); - if (qp.has_unmovable) - err = 1; if (!qp.first) /* whole range in hole */ err = -EFAULT; - return err; + return err ? : qp.nr_failed; } /* @@ -1028,16 +1015,16 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask, } #ifdef CONFIG_MIGRATION -static int migrate_folio_add(struct folio *folio, struct list_head *foliolist, +static bool migrate_folio_add(struct folio *folio, struct list_head *foliolist, unsigned long flags) { /* - * We try to migrate only unshared folios. If it is shared it - * is likely not worth migrating. + * Unless MPOL_MF_MOVE_ALL, we try to avoid migrating a shared folio. + * Choosing not to migrate a shared folio is not counted as a failure. * * To check if the folio is shared, ideally we want to make sure * every page is mapped to the same process. Doing that is very - * expensive, so check the estimated mapcount of the folio instead. + * expensive, so check the estimated sharers of the folio instead. */ if ((flags & MPOL_MF_MOVE_ALL) || folio_estimated_sharers(folio) == 1) { if (folio_isolate_lru(folio)) { @@ -1045,32 +1032,31 @@ static int migrate_folio_add(struct folio *folio, struct list_head *foliolist, node_stat_mod_folio(folio, NR_ISOLATED_ANON + folio_is_file_lru(folio), folio_nr_pages(folio)); - } else if (flags & MPOL_MF_STRICT) { + } else { /* * Non-movable folio may reach here. And, there may be * temporary off LRU folios or non-LRU movable folios. * Treat them as unmovable folios since they can't be - * isolated, so they can't be moved at the moment. It - * should return -EIO for this case too. + * isolated, so they can't be moved at the moment. */ - return -EIO; + return false; } } - - return 0; + return true; } /* * Migrate pages from one node to a target node. * Returns error or the number of pages not migrated. 
*/ -static int migrate_to_node(struct mm_struct *mm, int source, int dest, - int flags) +static long migrate_to_node(struct mm_struct *mm, int source, int dest, + int flags) { nodemask_t nmask; struct vm_area_struct *vma; LIST_HEAD(pagelist); - int err = 0; + long nr_failed; + long err = 0; struct migration_target_control mtc = { .nid = dest, .gfp_mask = GFP_HIGHUSER_MOVABLE | __GFP_THISNODE, @@ -1079,23 +1065,27 @@ static int migrate_to_node(struct mm_struct *mm, int source, int dest, nodes_clear(nmask); node_set(source, nmask); - /* - * This does not "check" the range but isolates all pages that - * need migration. Between passing in the full user address - * space range and MPOL_MF_DISCONTIG_OK, this call can not fail. - */ - vma = find_vma(mm, 0); VM_BUG_ON(!(flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))); - queue_pages_range(mm, vma->vm_start, mm->task_size, &nmask, - flags | MPOL_MF_DISCONTIG_OK, &pagelist, false); + vma = find_vma(mm, 0); + + /* + * This does not migrate the range, but isolates all pages that + * need migration. Between passing in the full user address + * space range and MPOL_MF_DISCONTIG_OK, this call cannot fail, + * but passes back the count of pages which could not be isolated. + */ + nr_failed = queue_pages_range(mm, vma->vm_start, mm->task_size, &nmask, + flags | MPOL_MF_DISCONTIG_OK, &pagelist); if (!list_empty(&pagelist)) { err = migrate_pages(&pagelist, alloc_migration_target, NULL, - (unsigned long)&mtc, MIGRATE_SYNC, MR_SYSCALL, NULL); + (unsigned long)&mtc, MIGRATE_SYNC, MR_SYSCALL, NULL); if (err) putback_movable_pages(&pagelist); } + if (err >= 0) + err += nr_failed; return err; } @@ -1108,8 +1098,8 @@ static int migrate_to_node(struct mm_struct *mm, int source, int dest, int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from, const nodemask_t *to, int flags) { - int busy = 0; - int err = 0; + long nr_failed = 0; + long err = 0; nodemask_t tmp; lru_cache_disable(); @@ -1191,7 +1181,7 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from, node_clear(source, tmp); err = migrate_to_node(mm, source, dest, flags); if (err > 0) - busy += err; + nr_failed += err; if (err < 0) break; } @@ -1200,8 +1190,7 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from, lru_cache_enable(); if (err < 0) return err; - return busy; - + return (nr_failed < INT_MAX) ? nr_failed : INT_MAX; } /* @@ -1240,10 +1229,10 @@ static struct folio *new_folio(struct folio *src, unsigned long start) } #else -static int migrate_folio_add(struct folio *folio, struct list_head *foliolist, +static bool migrate_folio_add(struct folio *folio, struct list_head *foliolist, unsigned long flags) { - return -EIO; + return false; } int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from, @@ -1267,8 +1256,8 @@ static long do_mbind(unsigned long start, unsigned long len, struct vma_iterator vmi; struct mempolicy *new; unsigned long end; - int err; - int ret; + long err; + long nr_failed; LIST_HEAD(pagelist); if (flags & ~(unsigned long)MPOL_MF_VALID) @@ -1308,10 +1297,8 @@ static long do_mbind(unsigned long start, unsigned long len, start, start + len, mode, mode_flags, nmask ? 
nodes_addr(*nmask)[0] : NUMA_NO_NODE);
 
-	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
-
+	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
 		lru_cache_disable();
-	}
 	{
 		NODEMASK_SCRATCH(scratch);
 		if (scratch) {
@@ -1327,44 +1314,37 @@ static long do_mbind(unsigned long start, unsigned long len,
 		goto mpol_out;
 
 	/*
-	 * Lock the VMAs before scanning for pages to migrate, to ensure we don't
-	 * miss a concurrently inserted page.
+	 * Lock the VMAs before scanning for pages to migrate,
+	 * to ensure we don't miss a concurrently inserted page.
 	 */
-	ret = queue_pages_range(mm, start, end, nmask,
-			  flags | MPOL_MF_INVERT, &pagelist, true);
+	nr_failed = queue_pages_range(mm, start, end, nmask,
+			flags | MPOL_MF_INVERT | MPOL_MF_WRLOCK, &pagelist);
 
-	if (ret < 0) {
-		err = ret;
-		goto up_out;
-	}
-
-	vma_iter_init(&vmi, mm, start);
-	prev = vma_prev(&vmi);
-	for_each_vma_range(vmi, vma, end) {
-		err = mbind_range(&vmi, vma, &prev, start, end, new);
-		if (err)
-			break;
+	if (nr_failed < 0) {
+		err = nr_failed;
+	} else {
+		vma_iter_init(&vmi, mm, start);
+		prev = vma_prev(&vmi);
+		for_each_vma_range(vmi, vma, end) {
+			err = mbind_range(&vmi, vma, &prev, start, end, new);
+			if (err)
+				break;
+		}
 	}
 
 	if (!err) {
-		int nr_failed = 0;
-
 		if (!list_empty(&pagelist)) {
 			WARN_ON_ONCE(flags & MPOL_MF_LAZY);
-			nr_failed = migrate_pages(&pagelist, new_folio, NULL,
+			nr_failed |= migrate_pages(&pagelist, new_folio, NULL,
 				start, MIGRATE_SYNC, MR_MEMPOLICY_MBIND, NULL);
-			if (nr_failed)
-				putback_movable_pages(&pagelist);
 		}
-
-		if (((ret > 0) || nr_failed) && (flags & MPOL_MF_STRICT))
+		if (nr_failed && (flags & MPOL_MF_STRICT))
 			err = -EIO;
-	} else {
-up_out:
-		if (!list_empty(&pagelist))
-			putback_movable_pages(&pagelist);
 	}
+	if (!list_empty(&pagelist))
+		putback_movable_pages(&pagelist);
+
 	mmap_write_unlock(mm);
 mpol_out:
 	mpol_put(new);

From patchwork Tue Oct 3 09:19:00 2023
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 147778
Date: Tue, 3 Oct 2023 02:19:00 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
cc: Andi Kleen, Christoph Lameter, Matthew Wilcox, Mike Kravetz,
    David Hildenbrand, Suren Baghdasaryan, Yang Shi, Sidhartha Kumar,
    Vishal Moola, Kefeng Wang, Greg Kroah-Hartman, Tejun Heo, Mel Gorman,
    Michal Hocko, "Huang, Ying", linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH v2 04/12] mempolicy trivia: delete those ancient pr_debug()s

Delete those ancient pr_debug()s - PDprintk()s in Andi Kleen's original
submission of core NUMA API, and useful when debugging shared mempolicy
lifetime back then, but not used recently.

Signed-off-by: Hugh Dickins
Reviewed-by: Matthew Wilcox (Oracle)
---
 mm/mempolicy.c | 21 ---------------------
 1 file changed, 21 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 752d880dcdf8..780498662b75 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -264,9 +264,6 @@ static struct mempolicy *mpol_new(unsigned short mode, unsigned short flags,
 {
 	struct mempolicy *policy;
 
-	pr_debug("setting mode %d flags %d nodes[0] %lx\n",
-		 mode, flags, nodes ? nodes_addr(*nodes)[0] : NUMA_NO_NODE);
-
 	if (mode == MPOL_DEFAULT) {
 		if (nodes && !nodes_empty(*nodes))
 			return ERR_PTR(-EINVAL);
@@ -768,11 +765,6 @@ static int vma_replace_policy(struct vm_area_struct *vma,
 
 	vma_assert_write_locked(vma);
 
-	pr_debug("vma %lx-%lx/%lx vm_ops %p vm_file %p set_policy %p\n",
-		 vma->vm_start, vma->vm_end, vma->vm_pgoff,
-		 vma->vm_ops, vma->vm_file,
-		 vma->vm_ops ? vma->vm_ops->set_policy : NULL);
-
 	new = mpol_dup(pol);
 	if (IS_ERR(new))
 		return PTR_ERR(new);
@@ -1293,10 +1285,6 @@ static long do_mbind(unsigned long start, unsigned long len,
 	if (!new)
 		flags |= MPOL_MF_DISCONTIG_OK;
 
-	pr_debug("mbind %lx-%lx mode:%d flags:%d nodes:%lx\n",
-		 start, start + len, mode, mode_flags,
-		 nmask ? nodes_addr(*nmask)[0] : NUMA_NO_NODE);
-
 	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
 		lru_cache_disable();
 	{
@@ -2516,8 +2504,6 @@ static void sp_insert(struct shared_policy *sp, struct sp_node *new)
 	}
 	rb_link_node(&new->nd, parent, p);
 	rb_insert_color(&new->nd, &sp->root);
-	pr_debug("inserting %lx-%lx: %d\n", new->start, new->end,
new->policy->mode : 0); } /* Find shared policy intersecting idx */ @@ -2656,7 +2642,6 @@ void mpol_put_task_policy(struct task_struct *task) static void sp_delete(struct shared_policy *sp, struct sp_node *n) { - pr_debug("deleting %lx-l%lx\n", n->start, n->end); rb_erase(&n->nd, &sp->root); sp_free(n); } @@ -2813,12 +2798,6 @@ int mpol_set_shared_policy(struct shared_policy *info, struct sp_node *new = NULL; unsigned long sz = vma_pages(vma); - pr_debug("set_shared_policy %lx sz %lu %d %d %lx\n", - vma->vm_pgoff, - sz, npol ? npol->mode : -1, - npol ? npol->flags : -1, - npol ? nodes_addr(npol->nodes)[0] : NUMA_NO_NODE); - if (npol) { new = sp_alloc(vma->vm_pgoff, vma->vm_pgoff + sz, npol); if (!new) From patchwork Tue Oct 3 09:20:14 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hugh Dickins X-Patchwork-Id: 147780 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:612c:2a8e:b0:403:3b70:6f57 with SMTP id in14csp1956577vqb; Tue, 3 Oct 2023 02:20:43 -0700 (PDT) X-Google-Smtp-Source: AGHT+IFGAmAsM8GGVQgVWDw4y5el/9JgpYFhC4oxYnpgW3chaWZFo4gSNVP7sd9J8QAoLFHsftwg X-Received: by 2002:a1f:dfc1:0:b0:49a:56d2:562d with SMTP id w184-20020a1fdfc1000000b0049a56d2562dmr11329311vkg.4.1696324843646; Tue, 03 Oct 2023 02:20:43 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1696324843; cv=none; d=google.com; s=arc-20160816; b=fNZA+Qs8x846M4DIpDrjSPAAIcM/oQzTstS/3ilfPJoAaB0Th+3V677kSqTITDScSF Qs4mSXCV38zOyJC14TdAcAQW6I3QcIu9Cz7fd9e8AAsdDKkkBAAYdiJxKEjnRALgWCTa FduA53w1kugU/nAEhhqiC0tAy+Dzs+APHH7NSX6Q5EiitgYaIbPEByuXNy2sCsQaZmwv rPxBtWC5+8EUaUpymH0xHIf/XmfS1a4BfJA+cdgr0IGfYmYKrp6+kv6vy9U6vFhFbfz7 QdTYVZiWAPPvu+aUJjc1IkkSP5S+lceas5TbpBzN7u5ca4NlmbrxbFtkEjzBIz9Z7Mut LhmQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:message-id:in-reply-to :subject:cc:to:from:date:dkim-signature; bh=WpSDmja3k+xJDYdjjbygGD+ZOvEonrCZy7oEBpIDr3Y=; fh=uua5dLukz2E+eiUjHQ3XQZ9nrmHo6GVFaq4ywUulylI=; b=LlacreFp33/Q/5YtmXA82u+y+dgjkwIB1NrDQieUm3nH/CKVB1vStxPKLFoQLKX0vM 4Tb34S/dX5LVKi+tBc1GV2TTJjPpoJ/EqIo9K0AI7DAfF+MNqFvj47L6EqkDhjsHuKq4 DRsh0aiU3WIUdJzJ/tRUz9QdoYjqA3wUjAxhJAyUDKyh1tW4P9hg9gAakufkk4TfeHTM LvtJfZvmSZ8aG4IwjcYfkKTxah8RGeKhbtWGnnKRcUaP4PkxS3QeUok9RVyTCpdlV1Kj Ms/SFwIVqf1vClhAXZbW4KfpT8ty0bUPpPpYH96ABd9uqy07P7iWE16MFR36lfvUIgnS Z2Mw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@google.com header.s=20230601 header.b=2qkyGMP2; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.33 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: from lipwig.vger.email (lipwig.vger.email. 
[23.128.96.33]) by mx.google.com with ESMTPS id x13-20020a170902ec8d00b001c71f14ba7fsi1099193plg.247.2023.10.03.02.20.43 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 03 Oct 2023 02:20:43 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.33 as permitted sender) client-ip=23.128.96.33; Authentication-Results: mx.google.com; dkim=pass header.i=@google.com header.s=20230601 header.b=2qkyGMP2; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.33 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: from out1.vger.email (depot.vger.email [IPv6:2620:137:e000::3:0]) by lipwig.vger.email (Postfix) with ESMTP id E4F7D8028B5E; Tue, 3 Oct 2023 02:20:40 -0700 (PDT) X-Virus-Status: Clean X-Virus-Scanned: clamav-milter 0.103.10 at lipwig.vger.email Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239596AbjJCJUZ (ORCPT + 18 others); Tue, 3 Oct 2023 05:20:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36200 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231506AbjJCJUX (ORCPT ); Tue, 3 Oct 2023 05:20:23 -0400 Received: from mail-yb1-xb32.google.com (mail-yb1-xb32.google.com [IPv6:2607:f8b0:4864:20::b32]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CB36EB8 for ; Tue, 3 Oct 2023 02:20:18 -0700 (PDT) Received: by mail-yb1-xb32.google.com with SMTP id 3f1490d57ef6-d8673a90f56so730877276.0 for ; Tue, 03 Oct 2023 02:20:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1696324818; x=1696929618; darn=vger.kernel.org; h=mime-version:references:message-id:in-reply-to:subject:cc:to:from :date:from:to:cc:subject:date:message-id:reply-to; bh=WpSDmja3k+xJDYdjjbygGD+ZOvEonrCZy7oEBpIDr3Y=; b=2qkyGMP2rfn/rqi5B3pyn9C3CPURYD7MBSYtcTBwbDYvOPCKSooeOd63M2MY/H+jW6 optuBF+duQhiCZkXct+LdijUzQEXb+4O8jRT0Nk7NGo3Oov5kceHAeJ8cM775/eRk9wz CWpAcJJEQ5mG5uGs6AFBIDCbIN063iecVcWs6rhtsiYnt3L/Se+Aa1koYU7mpe9iQJ+X tg0LnIWET3RoL22fI9Qib2L3fTJH077mmsmNDWHks1qSPBL13VMH5QzlNZR3ufTCTIy0 4qA0xBmDxdM4dC2Mk0QYZllBmKsB6XnG0RhA8asN+jS8Bx64k0NcYV75G+TW4JeKsrkv 30VQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1696324818; x=1696929618; h=mime-version:references:message-id:in-reply-to:subject:cc:to:from :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=WpSDmja3k+xJDYdjjbygGD+ZOvEonrCZy7oEBpIDr3Y=; b=DpOJYSAYUD7d9FRTnnFf4HnE8PRarCQSXTNGaQ4juKrWfmvw9sH9JnyB/sRuHCXzLa 6Oa1hNOyBKV6cU+Vv9rjYijNbNY2MCFI6VHNYVBFi1WyaJRUc1nz4np0WFeOe1HUHoiW 3AYy1wvmzCqDGomssJzORYMyWEgTJ1F2aY601Q3tglFtn+/6wNweS4dGzb5lCfhIJ9Mw 1lXgQ39av+S4iWDDJYkavsXXftXeuCABy29JjlX9sSsT+wUUX4tOTxWHgHcQNvL2FiqI PF2eRQPdWVbz+uOqVRYKxVllAa+PQFXuyvWSLeEjuC4llYU7eLGT1x1Vgzc/K22f5YYD ALpw== X-Gm-Message-State: AOJu0YyCeNBk5oGMTsLoEpJxXs71cy9pHquciNaaHMptaDjGIbEKrmc1 AbE9bpVC+Fz+zfGLujfqMaXUwg== X-Received: by 2002:a25:50ce:0:b0:d90:b45d:6e6a with SMTP id e197-20020a2550ce000000b00d90b45d6e6amr5801772ybb.2.1696324817790; Tue, 03 Oct 2023 02:20:17 -0700 (PDT) Received: from ripple.attlocal.net (172-10-233-147.lightspeed.sntcca.sbcglobal.net. 
[172.10.233.147]) by smtp.gmail.com with ESMTPSA id j2-20020a258b82000000b00d8674371317sm279506ybl.36.2023.10.03.02.20.15 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 03 Oct 2023 02:20:17 -0700 (PDT) Date: Tue, 3 Oct 2023 02:20:14 -0700 (PDT) From: Hugh Dickins X-X-Sender: hugh@ripple.attlocal.net To: Andrew Morton cc: Andi Kleen , Christoph Lameter , Matthew Wilcox , Mike Kravetz , David Hildenbrand , Suren Baghdasaryan , Yang Shi , Sidhartha Kumar , Vishal Moola , Kefeng Wang , Greg Kroah-Hartman , Tejun Heo , Mel Gorman , Michal Hocko , "Huang, Ying" , linux-kernel@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH v2 05/12] mempolicy trivia: slightly more consistent naming In-Reply-To: Message-ID: <68287974-b6ae-7df-4ba-d19ddd69cbf@google.com> References: MIME-Version: 1.0 X-Spam-Status: No, score=-8.4 required=5.0 tests=DKIMWL_WL_MED,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lipwig.vger.email Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (lipwig.vger.email [0.0.0.0]); Tue, 03 Oct 2023 02:20:40 -0700 (PDT) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1778725519472190429 X-GMAIL-MSGID: 1778725519472190429 Before getting down to work, do a little cleanup, mainly of inconsistent variable naming. I gave up trying to rationalize mpol versus pol versus policy, and node versus nid, but let's avoid p and nd. Remove a few superfluous blank lines, but add one; and here prefer vma->vm_policy to vma_policy(vma) - the latter being appropriate in other sources, which have to allow for !CONFIG_NUMA. That intriguing line about KERNEL_DS? should have gone in v2.6.15, when numa_policy_init() stopped using set_mempolicy(2)'s system call handler. 
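(For readers less familiar with mm/ internals: mm/mempolicy.c is only built when CONFIG_NUMA=y, which is why it can dereference vma->vm_policy directly; code shared with !CONFIG_NUMA builds keeps using the vma_policy() wrapper. Roughly, as a sketch rather than a verbatim quote of the headers:

#ifdef CONFIG_NUMA
#define vma_policy(vma)	((vma)->vm_policy)
#else
#define vma_policy(vma)	NULL
#endif

so the direct field access is fine here, but not in generic callers that must build without NUMA.)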
Signed-off-by: Hugh Dickins Reviewed-by: Matthew Wilcox (Oracle) --- include/linux/mempolicy.h | 11 +++---- mm/mempolicy.c | 73 +++++++++++++++++++---------------------- 2 files changed, 38 insertions(+), 46 deletions(-) diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h index 6c2754d7bfed..325b7200c311 100644 --- a/include/linux/mempolicy.h +++ b/include/linux/mempolicy.h @@ -126,10 +126,9 @@ struct shared_policy { int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst); void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol); -int mpol_set_shared_policy(struct shared_policy *info, - struct vm_area_struct *vma, - struct mempolicy *new); -void mpol_free_shared_policy(struct shared_policy *p); +int mpol_set_shared_policy(struct shared_policy *sp, + struct vm_area_struct *vma, struct mempolicy *mpol); +void mpol_free_shared_policy(struct shared_policy *sp); struct mempolicy *mpol_shared_policy_lookup(struct shared_policy *sp, unsigned long idx); @@ -193,7 +192,7 @@ static inline bool mpol_equal(struct mempolicy *a, struct mempolicy *b) return true; } -static inline void mpol_put(struct mempolicy *p) +static inline void mpol_put(struct mempolicy *pol) { } @@ -212,7 +211,7 @@ static inline void mpol_shared_policy_init(struct shared_policy *sp, { } -static inline void mpol_free_shared_policy(struct shared_policy *p) +static inline void mpol_free_shared_policy(struct shared_policy *sp) { } diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 780498662b75..c7906a034959 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -25,7 +25,7 @@ * to the last. It would be better if bind would truly restrict * the allocation to memory nodes instead * - * preferred Try a specific node first before normal fallback. + * preferred Try a specific node first before normal fallback. * As a special case NUMA_NO_NODE here means do the allocation * on the local CPU. This is normally identical to default, * but useful to set in a VMA when you have a non default @@ -52,7 +52,7 @@ * on systems with highmem kernel lowmem allocation don't get policied. * Same with GFP_DMA allocations. * - * For shmfs/tmpfs/hugetlbfs shared memory the policy is shared between + * For shmem/tmpfs shared memory the policy is shared between * all users and remembered even when nobody has memory mapped. */ @@ -291,6 +291,7 @@ static struct mempolicy *mpol_new(unsigned short mode, unsigned short flags, return ERR_PTR(-EINVAL); } else if (nodes_empty(*nodes)) return ERR_PTR(-EINVAL); + policy = kmem_cache_alloc(policy_cache, GFP_KERNEL); if (!policy) return ERR_PTR(-ENOMEM); @@ -303,11 +304,11 @@ static struct mempolicy *mpol_new(unsigned short mode, unsigned short flags, } /* Slow path of a mpol destructor. */ -void __mpol_put(struct mempolicy *p) +void __mpol_put(struct mempolicy *pol) { - if (!atomic_dec_and_test(&p->refcnt)) + if (!atomic_dec_and_test(&pol->refcnt)) return; - kmem_cache_free(policy_cache, p); + kmem_cache_free(policy_cache, pol); } static void mpol_rebind_default(struct mempolicy *pol, const nodemask_t *nodes) @@ -364,7 +365,6 @@ static void mpol_rebind_policy(struct mempolicy *pol, const nodemask_t *newmask) * * Called with task's alloc_lock held. */ - void mpol_rebind_task(struct task_struct *tsk, const nodemask_t *new) { mpol_rebind_policy(tsk->mempolicy, new); @@ -375,7 +375,6 @@ void mpol_rebind_task(struct task_struct *tsk, const nodemask_t *new) * * Call holding a reference to mm. Takes mm->mmap_lock during call. 
*/ - void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new) { struct vm_area_struct *vma; @@ -757,7 +756,7 @@ queue_pages_range(struct mm_struct *mm, unsigned long start, unsigned long end, * This must be called with the mmap_lock held for writing. */ static int vma_replace_policy(struct vm_area_struct *vma, - struct mempolicy *pol) + struct mempolicy *pol) { int err; struct mempolicy *old; @@ -803,7 +802,7 @@ static int mbind_range(struct vma_iterator *vmi, struct vm_area_struct *vma, vmstart = vma->vm_start; } - if (mpol_equal(vma_policy(vma), new_pol)) { + if (mpol_equal(vma->vm_policy, new_pol)) { *prev = vma; return 0; } @@ -875,18 +874,18 @@ static long do_set_mempolicy(unsigned short mode, unsigned short flags, * * Called with task's alloc_lock held */ -static void get_policy_nodemask(struct mempolicy *p, nodemask_t *nodes) +static void get_policy_nodemask(struct mempolicy *pol, nodemask_t *nodes) { nodes_clear(*nodes); - if (p == &default_policy) + if (pol == &default_policy) return; - switch (p->mode) { + switch (pol->mode) { case MPOL_BIND: case MPOL_INTERLEAVE: case MPOL_PREFERRED: case MPOL_PREFERRED_MANY: - *nodes = p->nodes; + *nodes = pol->nodes; break; case MPOL_LOCAL: /* return empty node mask for local allocation */ @@ -1654,7 +1653,6 @@ static int kernel_migrate_pages(pid_t pid, unsigned long maxnode, out_put: put_task_struct(task); goto out; - } SYSCALL_DEFINE4(migrate_pages, pid_t, pid, unsigned long, maxnode, @@ -1664,7 +1662,6 @@ SYSCALL_DEFINE4(migrate_pages, pid_t, pid, unsigned long, maxnode, return kernel_migrate_pages(pid, maxnode, old_nodes, new_nodes); } - /* Retrieve NUMA policy */ static int kernel_get_mempolicy(int __user *policy, unsigned long __user *nmask, @@ -1847,10 +1844,10 @@ nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy) * policy_node() is always coupled with policy_nodemask(), which * secures the nodemask limit for 'bind' and 'prefer-many' policy. 
*/ -static int policy_node(gfp_t gfp, struct mempolicy *policy, int nd) +static int policy_node(gfp_t gfp, struct mempolicy *policy, int nid) { if (policy->mode == MPOL_PREFERRED) { - nd = first_node(policy->nodes); + nid = first_node(policy->nodes); } else { /* * __GFP_THISNODE shouldn't even be used with the bind policy @@ -1865,19 +1862,18 @@ static int policy_node(gfp_t gfp, struct mempolicy *policy, int nd) policy->home_node != NUMA_NO_NODE) return policy->home_node; - return nd; + return nid; } /* Do dynamic interleaving for a process */ -static unsigned interleave_nodes(struct mempolicy *policy) +static unsigned int interleave_nodes(struct mempolicy *policy) { - unsigned next; - struct task_struct *me = current; + unsigned int nid; - next = next_node_in(me->il_prev, policy->nodes); - if (next < MAX_NUMNODES) - me->il_prev = next; - return next; + nid = next_node_in(current->il_prev, policy->nodes); + if (nid < MAX_NUMNODES) + current->il_prev = nid; + return nid; } /* @@ -2367,7 +2363,7 @@ unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp, int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst) { - struct mempolicy *pol = mpol_dup(vma_policy(src)); + struct mempolicy *pol = mpol_dup(src->vm_policy); if (IS_ERR(pol)) return PTR_ERR(pol); @@ -2791,40 +2787,40 @@ void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol) } } -int mpol_set_shared_policy(struct shared_policy *info, - struct vm_area_struct *vma, struct mempolicy *npol) +int mpol_set_shared_policy(struct shared_policy *sp, + struct vm_area_struct *vma, struct mempolicy *pol) { int err; struct sp_node *new = NULL; unsigned long sz = vma_pages(vma); - if (npol) { - new = sp_alloc(vma->vm_pgoff, vma->vm_pgoff + sz, npol); + if (pol) { + new = sp_alloc(vma->vm_pgoff, vma->vm_pgoff + sz, pol); if (!new) return -ENOMEM; } - err = shared_policy_replace(info, vma->vm_pgoff, vma->vm_pgoff+sz, new); + err = shared_policy_replace(sp, vma->vm_pgoff, vma->vm_pgoff + sz, new); if (err && new) sp_free(new); return err; } /* Free a backing policy store on inode delete. */ -void mpol_free_shared_policy(struct shared_policy *p) +void mpol_free_shared_policy(struct shared_policy *sp) { struct sp_node *n; struct rb_node *next; - if (!p->root.rb_node) + if (!sp->root.rb_node) return; - write_lock(&p->lock); - next = rb_first(&p->root); + write_lock(&sp->lock); + next = rb_first(&sp->root); while (next) { n = rb_entry(next, struct sp_node, nd); next = rb_next(&n->nd); - sp_delete(p, n); + sp_delete(sp, n); } - write_unlock(&p->lock); + write_unlock(&sp->lock); } #ifdef CONFIG_NUMA_BALANCING @@ -2874,7 +2870,6 @@ static inline void __init check_numabalancing_enable(void) } #endif /* CONFIG_NUMA_BALANCING */ -/* assumes fs == KERNEL_DS */ void __init numa_policy_init(void) { nodemask_t interleave_nodes; @@ -2937,7 +2932,6 @@ void numa_default_policy(void) /* * Parse and format mempolicy from/to strings */ - static const char * const policy_modes[] = { [MPOL_DEFAULT] = "default", @@ -2948,7 +2942,6 @@ static const char * const policy_modes[] = [MPOL_PREFERRED_MANY] = "prefer (many)", }; - #ifdef CONFIG_TMPFS /** * mpol_parse_str - parse string to mempolicy, for tmpfs mpol mount option. 
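To put the interleave_nodes() cleanup above in context, here is a minimal userspace sketch of requesting a task-wide interleave policy - illustrative only, assuming a machine with nodes 0 and 1; needs libnuma's <numaif.h>, link with -lnuma, error handling trimmed:

#include <numaif.h>	/* set_mempolicy(), MPOL_INTERLEAVE */
#include <stdlib.h>
#include <string.h>

int main(void)
{
	unsigned long nodemask = 0x3;	/* nodes 0 and 1 */
	char *buf;

	/* ask that this task's new pages interleave across nodes 0 and 1:
	 * the round-robin that interleave_nodes() implements kernel-side */
	if (set_mempolicy(MPOL_INTERLEAVE, &nodemask, sizeof(nodemask) * 8))
		return 1;

	buf = malloc(1 << 20);
	memset(buf, 0, 1 << 20);	/* touching the pages faults them in, alternating nodes */
	free(buf);
	return 0;
}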
From patchwork Tue Oct 3 09:21:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hugh Dickins X-Patchwork-Id: 147781 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:612c:2a8e:b0:403:3b70:6f57 with SMTP id in14csp1957149vqb; Tue, 3 Oct 2023 02:21:56 -0700 (PDT) X-Google-Smtp-Source: AGHT+IEms5xLhc03xoj9OqLsZEkSFiaZowQ5WWCE3xgOZTIYY767Qx08ovj3JbKknZHLX9ueKAzv X-Received: by 2002:a05:6a20:7f84:b0:15d:3a10:18c6 with SMTP id d4-20020a056a207f8400b0015d3a1018c6mr12251127pzj.45.1696324916328; Tue, 03 Oct 2023 02:21:56 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1696324916; cv=none; d=google.com; s=arc-20160816; b=xntsA9kWysoayBivbwFvjCy0NYjU6/dKyXQgnw/nLXCH9DhlJ/4xXmG6ynv514xwhh cCMqu8fsJUBJA8nBsRlo4QWwj8NYj4SjZ9ueoCam5pA8+hatJQdIWmpYHOhIo2FcuN3x c51r9lofKzrOTHMI0A/1j68Dg6gm1RInB/OAXwrx0b0P7eHxzZpNwPJyJdJR0rHAQTz6 iDfGm6kI6P+H2lBbjY7a6stvjzIkNFm1zXEzlMgu5WU7o/e/5rM8ZP5S21+j8/0BzFx0 AoXpWCUnYUS3T+6u0fVePeGyMoIgtEJ4ozZfTPIPdbL9CLk7ihI7lqtSRk0HvAJGhXXB 4KMQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:message-id:in-reply-to :subject:cc:to:from:date:dkim-signature; bh=TKEb9Cd3xCSrgbL40B7c6RQSPbdbractxdPzCveSCj4=; fh=uua5dLukz2E+eiUjHQ3XQZ9nrmHo6GVFaq4ywUulylI=; b=L8zsGXoopC/ETwFZp5dYHL+k/wqA3re+dRxNAvubesjvCqOZlvmMWzidTuAgWTf9yP dLLe47xSpDGcaFhktet25SwV0vB+l5J2dlxuI0nh3x++Dwlz9mo62rb+aRLXKAfRCayF 9CmQ0gZFi8t55tH+vDm9aYE3k5GIc7+Co8i9/EOJG7Bssz14rkvv/h5sBw4boIq6GwbR jd+Xu2npL+yUlILf3PY3xpCj5+Pwv13BJUkz5zsIoE2FOqrOoaZ0eLt3Xo8y4MQ53kkP /z1dpVLy7BfYCmHlSC72Wjs+nvu6NBMfnbjSbAWtgjsq1BKbF8w8oX3CzAwbUnFkOlWX N6tA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@google.com header.s=20230601 header.b=wVEBzCa6; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::3:2 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: from agentk.vger.email (agentk.vger.email. 
[2620:137:e000::3:2]) by mx.google.com with ESMTPS id w10-20020a170902e88a00b001c613b5e778si1092329plg.557.2023.10.03.02.21.56 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 03 Oct 2023 02:21:56 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::3:2 as permitted sender) client-ip=2620:137:e000::3:2; Authentication-Results: mx.google.com; dkim=pass header.i=@google.com header.s=20230601 header.b=wVEBzCa6; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::3:2 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: from out1.vger.email (depot.vger.email [IPv6:2620:137:e000::3:0]) by agentk.vger.email (Postfix) with ESMTP id 40293810498E; Tue, 3 Oct 2023 02:21:54 -0700 (PDT) X-Virus-Status: Clean X-Virus-Scanned: clamav-milter 0.103.10 at agentk.vger.email Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239596AbjJCJVm (ORCPT + 18 others); Tue, 3 Oct 2023 05:21:42 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45930 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231509AbjJCJVl (ORCPT ); Tue, 3 Oct 2023 05:21:41 -0400 Received: from mail-qk1-x72c.google.com (mail-qk1-x72c.google.com [IPv6:2607:f8b0:4864:20::72c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0652690 for ; Tue, 3 Oct 2023 02:21:38 -0700 (PDT) Received: by mail-qk1-x72c.google.com with SMTP id af79cd13be357-77410032cedso54870485a.1 for ; Tue, 03 Oct 2023 02:21:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1696324897; x=1696929697; darn=vger.kernel.org; h=mime-version:references:message-id:in-reply-to:subject:cc:to:from :date:from:to:cc:subject:date:message-id:reply-to; bh=TKEb9Cd3xCSrgbL40B7c6RQSPbdbractxdPzCveSCj4=; b=wVEBzCa6TWI1w/l66MYDermvG1lhghYe0MvLeHM41NvjIQ869kXawkgSADOZQI10SQ 1C6MEI3fRrJo+E/zuB5iY8EzzBicy+bLy6GKsPewlj86KZQVqi1lswt8kKiS+6/3NRR3 FoiADPOGAuC5suC+uoeMA+uISKXTtS95rx2eNNZwxVE3ON/Hrq5l7hVsKOZAB1qHrJ43 VyWWb5c60RL5oGP6Mvx2ARLd8MG4KQbxlUc2Urx1xJpA1aTIzmh65d79tG2o41DXQY4H cMvv3lgr0PwSpSAD4mgICRiAq7dUIjFLM5cdszkGn5lp8/lQOZXkypF8kZh1JtKahIm/ /SmA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1696324897; x=1696929697; h=mime-version:references:message-id:in-reply-to:subject:cc:to:from :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=TKEb9Cd3xCSrgbL40B7c6RQSPbdbractxdPzCveSCj4=; b=AWTM55ZZsAIOkVTl9F3nCSatL95WLXjUzpXRUenCy0oEnwuO6+1IkZrpdlojGdJSAQ qy3gXicIv1qGogpVDCv1r4hdUszLWjxe/2imS84vB1RLnm0FSCIu97G54u51nSjXXeQ3 xIkAan5LNRQLiwJrwMwuKanfYf02Yf1nwYqGUIxBQKB4x2H0tYncfgEoCvc2QP3Zee46 i54hMzomDGXu321t9KZnMNHwdZbY12ZXbC3Buy/OQz97yLAkkw0hGTnZ/rZnTOe8GKhF vfRubRAGwCmCr0JsHKPMagGNAi1hkDexWVU0Oh6fXCe9Iv6dPbH41lZyUO53ZIJDkazg ZF3A== X-Gm-Message-State: AOJu0Yy17ryH3Mu39aUSTFlXlMHHrJDkCGgLQNhMmL4GaAv1Bpxc1FAF sS6+NUZ5u1iaU7wqD2joxGUCwA== X-Received: by 2002:a05:620a:95c:b0:774:1d91:e41a with SMTP id w28-20020a05620a095c00b007741d91e41amr13254046qkw.77.1696324897011; Tue, 03 Oct 2023 02:21:37 -0700 (PDT) Received: from ripple.attlocal.net (172-10-233-147.lightspeed.sntcca.sbcglobal.net. 
[172.10.233.147]) by smtp.gmail.com with ESMTPSA id a14-20020a81bb4e000000b0059be6a5fcffsm247867ywl.44.2023.10.03.02.21.34 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 03 Oct 2023 02:21:36 -0700 (PDT) Date: Tue, 3 Oct 2023 02:21:34 -0700 (PDT) From: Hugh Dickins X-X-Sender: hugh@ripple.attlocal.net To: Andrew Morton cc: Andi Kleen , Christoph Lameter , Matthew Wilcox , Mike Kravetz , David Hildenbrand , Suren Baghdasaryan , Yang Shi , Sidhartha Kumar , Vishal Moola , Kefeng Wang , Greg Kroah-Hartman , Tejun Heo , Mel Gorman , Michal Hocko , "Huang, Ying" , linux-kernel@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH v2 06/12] mempolicy trivia: use pgoff_t in shared mempolicy tree In-Reply-To: Message-ID: <5451157-3818-4af5-fd2c-5d26a5d1dc53@google.com> References: MIME-Version: 1.0 X-Spam-Status: No, score=-8.4 required=5.0 tests=DKIMWL_WL_MED,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on agentk.vger.email Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (agentk.vger.email [0.0.0.0]); Tue, 03 Oct 2023 02:21:54 -0700 (PDT) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1778725595863140592 X-GMAIL-MSGID: 1778725595863140592 Prefer the more explicit "pgoff_t" to "unsigned long" when dealing with a shared mempolicy tree. Delete confusing comment about pseudo mm vmas. Signed-off-by: Hugh Dickins --- include/linux/mempolicy.h | 20 +++++++------------- mm/mempolicy.c | 12 ++++++------ 2 files changed, 13 insertions(+), 19 deletions(-) diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h index 325b7200c311..c69f9480d5e4 100644 --- a/include/linux/mempolicy.h +++ b/include/linux/mempolicy.h @@ -107,22 +107,16 @@ static inline bool mpol_equal(struct mempolicy *a, struct mempolicy *b) /* * Tree of shared policies for a shared memory region. - * Maintain the policies in a pseudo mm that contains vmas. The vmas - * carry the policy. As a special twist the pseudo mm is indexed in pages, not - * bytes, so that we can work with shared memory segments bigger than - * unsigned long. 
*/ - -struct sp_node { - struct rb_node nd; - unsigned long start, end; - struct mempolicy *policy; -}; - struct shared_policy { struct rb_root root; rwlock_t lock; }; +struct sp_node { + struct rb_node nd; + pgoff_t start, end; + struct mempolicy *policy; +}; int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst); void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol); @@ -130,7 +124,7 @@ int mpol_set_shared_policy(struct shared_policy *sp, struct vm_area_struct *vma, struct mempolicy *mpol); void mpol_free_shared_policy(struct shared_policy *sp); struct mempolicy *mpol_shared_policy_lookup(struct shared_policy *sp, - unsigned long idx); + pgoff_t idx); struct mempolicy *get_task_policy(struct task_struct *p); struct mempolicy *__get_vma_policy(struct vm_area_struct *vma, @@ -216,7 +210,7 @@ static inline void mpol_free_shared_policy(struct shared_policy *sp) } static inline struct mempolicy * -mpol_shared_policy_lookup(struct shared_policy *sp, unsigned long idx) +mpol_shared_policy_lookup(struct shared_policy *sp, pgoff_t idx) { return NULL; } diff --git a/mm/mempolicy.c b/mm/mempolicy.c index c7906a034959..1d3f9e1ecbb8 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -2448,8 +2448,8 @@ bool __mpol_equal(struct mempolicy *a, struct mempolicy *b) * lookup first element intersecting start-end. Caller holds sp->lock for * reading or for writing */ -static struct sp_node * -sp_lookup(struct shared_policy *sp, unsigned long start, unsigned long end) +static struct sp_node *sp_lookup(struct shared_policy *sp, + pgoff_t start, pgoff_t end) { struct rb_node *n = sp->root.rb_node; @@ -2503,8 +2503,8 @@ static void sp_insert(struct shared_policy *sp, struct sp_node *new) } /* Find shared policy intersecting idx */ -struct mempolicy * -mpol_shared_policy_lookup(struct shared_policy *sp, unsigned long idx) +struct mempolicy *mpol_shared_policy_lookup(struct shared_policy *sp, + pgoff_t idx) { struct mempolicy *pol = NULL; struct sp_node *sn; @@ -2672,8 +2672,8 @@ static struct sp_node *sp_alloc(unsigned long start, unsigned long end, } /* Replace a policy range. 
*/ -static int shared_policy_replace(struct shared_policy *sp, unsigned long start, - unsigned long end, struct sp_node *new) +static int shared_policy_replace(struct shared_policy *sp, pgoff_t start, + pgoff_t end, struct sp_node *new) { struct sp_node *n; struct sp_node *n_new = NULL; From patchwork Tue Oct 3 09:22:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hugh Dickins X-Patchwork-Id: 147782 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:612c:2a8e:b0:403:3b70:6f57 with SMTP id in14csp1957778vqb; Tue, 3 Oct 2023 02:23:35 -0700 (PDT) X-Google-Smtp-Source: AGHT+IE30NjGtchonbN3KsT8A+n7JxiPEpuYHPVVkpQ0dT9eFDA6JlhZigVc/MWnDbzxZtW2rYa8 X-Received: by 2002:a17:903:120b:b0:1c6:d1f:514d with SMTP id l11-20020a170903120b00b001c60d1f514dmr13928204plh.45.1696325015647; Tue, 03 Oct 2023 02:23:35 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1696325015; cv=none; d=google.com; s=arc-20160816; b=DJnqCZY88Wy2uvjyeogl49U16iQiKvm6NG7Zd407uOLEA/Ffarupc0PjJ73zw0NaVn BeeLU2FbOwhicYPzXJ18MErBEp4qkhvGYotrkvEemqv3W/eaVTgQASzsWPclslHh2u/w yUtWXB9KGtdXKJBXUd6rtxi6lvWKrZXtrBhXOIqBRN/jnUvN1P8OiuHQKYLZWjwnLcoP Ya4NTF8dvn4w82ZnsTlqZu6fAG/+Cc8+Aeq0b+JZPp3FUivu2fXFFMRBzq7eK0sqDlJu yMZPi2mXs80iwzSFfyDEX4xxtKfrUZGGPKlhoUwUa0tIg+rNP/qdde6GPE9ya/F3GjL/ vuRg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:message-id:in-reply-to :subject:cc:to:from:date:dkim-signature; bh=Mzr/4uL62aOMXHcIAqfjDQcxRGe9jlWGjvKm9366txw=; fh=uua5dLukz2E+eiUjHQ3XQZ9nrmHo6GVFaq4ywUulylI=; b=SrtzGMZPynyQTa+yYltNO+Ao2PvdfZjsRlRkUCGOOIluEcyMTd4yqp+/WR7FoF6OY6 aHifcjKWc/8HkhYnF1bE4+jung13qwg1VHH8viVOtLy6OSVvm+7VPyP2sMxAmj1ZxhZ4 9wtnGtm0m5AJch71J2EUnP55V3xApgrX14U4HPqc6wlwmI+j2J+r/9xB65i3NejmRKnz bNQqaW9c6yEGB36tx8CSzv1QPU5PxM3c8xObcxhrp1Kqvk2WDlLjB03I4SBT6uRUfRnA vowQBLMee0ALd2kc06UmdTs9392K7+dGgEQv4ptWQlAyxo062mcNx7BhyI9AOG/WKGpV dD3w== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@google.com header.s=20230601 header.b=u2VvcG01; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::3:2 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: from agentk.vger.email (agentk.vger.email. 
[2620:137:e000::3:2]) by mx.google.com with ESMTPS id cp1-20020a170902e78100b001c6285295absi965962plb.514.2023.10.03.02.23.35 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 03 Oct 2023 02:23:35 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::3:2 as permitted sender) client-ip=2620:137:e000::3:2; Authentication-Results: mx.google.com; dkim=pass header.i=@google.com header.s=20230601 header.b=u2VvcG01; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::3:2 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: from out1.vger.email (depot.vger.email [IPv6:2620:137:e000::3:0]) by agentk.vger.email (Postfix) with ESMTP id 957A5810499D; Tue, 3 Oct 2023 02:23:33 -0700 (PDT) X-Virus-Status: Clean X-Virus-Scanned: clamav-milter 0.103.10 at agentk.vger.email Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239634AbjJCJXK (ORCPT + 18 others); Tue, 3 Oct 2023 05:23:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54926 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239624AbjJCJXG (ORCPT ); Tue, 3 Oct 2023 05:23:06 -0400 Received: from mail-yb1-xb33.google.com (mail-yb1-xb33.google.com [IPv6:2607:f8b0:4864:20::b33]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 585FFB7 for ; Tue, 3 Oct 2023 02:23:03 -0700 (PDT) Received: by mail-yb1-xb33.google.com with SMTP id 3f1490d57ef6-d865c441a54so734843276.1 for ; Tue, 03 Oct 2023 02:23:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1696324982; x=1696929782; darn=vger.kernel.org; h=mime-version:references:message-id:in-reply-to:subject:cc:to:from :date:from:to:cc:subject:date:message-id:reply-to; bh=Mzr/4uL62aOMXHcIAqfjDQcxRGe9jlWGjvKm9366txw=; b=u2VvcG01nUNwn5OWn7Il/uGa0HHiBypHthBaYP9WNSd552T6Q47T/pFLnvGuvLBWyz ShrNGkvODTNH7RDHrhX+xMp7/F0qvGwVnEgQ3r8fFkVlMjOBjN7qeNK0/jAOr8yJ6ufj FApIJ6NlDMs3KTb0QHPon+puNNkHyhEV/ARKT8VaMF+kwxNFjkIXP0v1Q57VO/xaLodP gsHrHw4XYE3KBWCLaemYNGH0fKa2TcVXpDtebWqvy6FWYYqUZPKByaE9Ptd/fDAQx3XJ aTs2W6N09uSBPnwP8hr1WabY9WYbM/zhNoqrfaMdm6gU7ALr00SMjalyQ8Y4FkiSg2RM zI/A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1696324982; x=1696929782; h=mime-version:references:message-id:in-reply-to:subject:cc:to:from :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=Mzr/4uL62aOMXHcIAqfjDQcxRGe9jlWGjvKm9366txw=; b=esKIiP/hlP7YeSXPrhPBoIx6wwvoYBnzL/RxNFak9qc537YejnJM1LmJHjarFOIoA2 zp81I5urbhlE1KJIdKM8ECMuxjRAqgAfdKUhGoOq5T1D/Q1l080Zs1HBwmoASFuWo/BU p5Ht/3dEJZPYhNKOAGZvu6VvL1oxev7U+SCcaqXIe04vMvg2lfFKY0DrtLtCO9B0Z4hC g3+EK6+WzELlLE3p63XAWOHEzyIlvxWVwmc/xX8VQ9IZ1r5N2zO17DaCqhOqEJhzPWNu RiqQEelHYWls3iVvdH5+D588r3yVUxHp7bUx0D0O/Kcor7S+cIuVbvcArjYETOGQU1/I IkZA== X-Gm-Message-State: AOJu0YwGL5qBRhrqwCAxiEf3oU1YEvRomK4k6iqH5psv10UNQ4S2YyYT D7CqHJOlfzEwXalv7wkGMCWKDw== X-Received: by 2002:a5b:c89:0:b0:d81:599f:a538 with SMTP id i9-20020a5b0c89000000b00d81599fa538mr10809754ybq.51.1696324982333; Tue, 03 Oct 2023 02:23:02 -0700 (PDT) Received: from ripple.attlocal.net (172-10-233-147.lightspeed.sntcca.sbcglobal.net. 
[172.10.233.147]) by smtp.gmail.com with ESMTPSA id c6-20020a259c06000000b00d816fa23bd4sm280649ybo.26.2023.10.03.02.23.00 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 03 Oct 2023 02:23:01 -0700 (PDT) Date: Tue, 3 Oct 2023 02:22:59 -0700 (PDT) From: Hugh Dickins X-X-Sender: hugh@ripple.attlocal.net To: Andrew Morton cc: Andi Kleen , Christoph Lameter , Matthew Wilcox , Mike Kravetz , David Hildenbrand , Suren Baghdasaryan , Yang Shi , Sidhartha Kumar , Vishal Moola , Kefeng Wang , Greg Kroah-Hartman , Tejun Heo , Mel Gorman , Michal Hocko , "Huang, Ying" , linux-kernel@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH v2 07/12] mempolicy: mpol_shared_policy_init() without pseudo-vma In-Reply-To: Message-ID: <3bef62d8-ae78-4c2-533-56a44ae425c@google.com> References: MIME-Version: 1.0 X-Spam-Status: No, score=-8.4 required=5.0 tests=DKIMWL_WL_MED,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on agentk.vger.email Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (agentk.vger.email [0.0.0.0]); Tue, 03 Oct 2023 02:23:33 -0700 (PDT) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1778725699965040548 X-GMAIL-MSGID: 1778725699965040548 mpol_shared_policy_init() does not need to use a pseudo-vma: it can use sp_alloc() and sp_insert() directly, since the object's shared policy tree is empty and inaccessible (needing no lock) at get_inode() time. Signed-off-by: Hugh Dickins --- mm/mempolicy.c | 28 ++++++++++++++-------------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 1d3f9e1ecbb8..5d99fd5cd60b 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -2756,30 +2756,30 @@ void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol) rwlock_init(&sp->lock); if (mpol) { - struct vm_area_struct pvma; - struct mempolicy *new; + struct sp_node *sn; + struct mempolicy *npol; NODEMASK_SCRATCH(scratch); if (!scratch) goto put_mpol; - /* contextualize the tmpfs mount point mempolicy */ - new = mpol_new(mpol->mode, mpol->flags, &mpol->w.user_nodemask); - if (IS_ERR(new)) + + /* contextualize the tmpfs mount point mempolicy to this file */ + npol = mpol_new(mpol->mode, mpol->flags, &mpol->w.user_nodemask); + if (IS_ERR(npol)) goto free_scratch; /* no valid nodemask intersection */ task_lock(current); - ret = mpol_set_nodemask(new, &mpol->w.user_nodemask, scratch); + ret = mpol_set_nodemask(npol, &mpol->w.user_nodemask, scratch); task_unlock(current); if (ret) - goto put_new; + goto put_npol; - /* Create pseudo-vma that contains just the policy */ - vma_init(&pvma, NULL); - pvma.vm_end = TASK_SIZE; /* policy covers entire file */ - mpol_set_shared_policy(sp, &pvma, new); /* adds ref */ - -put_new: - mpol_put(new); /* drop initial ref */ + /* alloc node covering entire file; adds ref to file's npol */ + sn = sp_alloc(0, MAX_LFS_FILESIZE >> PAGE_SHIFT, npol); + if (sn) + sp_insert(sp, sn); +put_npol: + mpol_put(npol); /* drop initial ref on file's npol */ free_scratch: NODEMASK_SCRATCH_FREE(scratch); put_mpol: From patchwork Tue Oct 3 09:24:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hugh Dickins X-Patchwork-Id: 147785 Return-Path: Delivered-To: 
ouuuleilei@gmail.com Received: by 2002:a05:612c:2a8e:b0:403:3b70:6f57 with SMTP id in14csp1958126vqb; Tue, 3 Oct 2023 02:24:38 -0700 (PDT) X-Google-Smtp-Source: AGHT+IH+VOZjrM4x4zr16imAg2hTT5FOF1qK1Ko5VnsBN8tkqJupBIu7t5DGItLiUlxrSunfmN0g X-Received: by 2002:a05:6a21:7785:b0:153:919e:18ce with SMTP id bd5-20020a056a21778500b00153919e18cemr13366362pzc.48.1696325078143; Tue, 03 Oct 2023 02:24:38 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1696325078; cv=none; d=google.com; s=arc-20160816; b=QcCHMPL7rd++fvbsV9hxqAgISiwUmKcnSx98Es6E9m7VNfgUuvEuhZlEZ+OJaPg2ZE xiZkVAt+nj9f7V5mDDYln/anAMBfJrW2lLzTgVD+u+u3dbmFet4avYoG44+6MIGOsiha ctGBjI5GYXxMTfZFocp18XvXhHehI3zQAFSoTbGDrNMQWL20YbNLYDxgEZK6iz6hu+UW IGwB3IBP+c8wrJUcKXAkFZn69AVuoVlFHADcj+mFdg3rJALe0BCIZNE9Am8luqsQz8OE 8EXxb9HHCk9Z8KymU+S5u0cpobeTlqgiZT24FiLj76MAtrIXJg29FIG/FJdI3N8H/24y yTvw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:message-id:in-reply-to :subject:cc:to:from:date:dkim-signature; bh=ooRbfvgflrDJY0S/ihm6kiwAA3KtmVPaxx1CoMlS90Q=; fh=uua5dLukz2E+eiUjHQ3XQZ9nrmHo6GVFaq4ywUulylI=; b=SzrvrNUxMtD1PXGHLTZuJu+sHiFwaNunT+A9GmI5d7EHTNneP+trJofdLIFvE1JcJ1 FdlcXb42tFqpuQh3JYD3BtRz7TiImelTd/C2RaKrYapiMMHdKdqc7BXMJa76HkkHHMB2 QeRt7LjCiypJonjF1rmq61cSd8UjAaFbekByJHZXEfxjbic1BJFIYGRq6d6tuLsQ7NN4 C7mSAfa1rlFvbKrjE28NeA+h6FJc+alaNIrAUJ9RSQBKCMrNz52L7IYo3J19DkO0hQvk Jge96scEO/RFECLsCLSRXgrPIaNPP0Zl2Qgos4shsKrn7D22nGmNpR858rEJ2nS8hTy4 bl7w== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@google.com header.s=20230601 header.b=fQhXw5qO; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.33 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: from lipwig.vger.email (lipwig.vger.email. 
[23.128.96.33]) by mx.google.com with ESMTPS id lh4-20020a170903290400b001c470e70cf7si988974plb.273.2023.10.03.02.24.37 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 03 Oct 2023 02:24:38 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.33 as permitted sender) client-ip=23.128.96.33; Authentication-Results: mx.google.com; dkim=pass header.i=@google.com header.s=20230601 header.b=fQhXw5qO; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.33 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: from out1.vger.email (depot.vger.email [IPv6:2620:137:e000::3:0]) by lipwig.vger.email (Postfix) with ESMTP id 66A108028933; Tue, 3 Oct 2023 02:24:35 -0700 (PDT) X-Virus-Status: Clean X-Virus-Scanned: clamav-milter 0.103.10 at lipwig.vger.email Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239596AbjJCJY0 (ORCPT + 18 others); Tue, 3 Oct 2023 05:24:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56844 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239647AbjJCJYZ (ORCPT ); Tue, 3 Oct 2023 05:24:25 -0400 Received: from mail-yw1-x1133.google.com (mail-yw1-x1133.google.com [IPv6:2607:f8b0:4864:20::1133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2F584D8 for ; Tue, 3 Oct 2023 02:24:22 -0700 (PDT) Received: by mail-yw1-x1133.google.com with SMTP id 00721157ae682-59e77e4f707so8703087b3.0 for ; Tue, 03 Oct 2023 02:24:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1696325061; x=1696929861; darn=vger.kernel.org; h=mime-version:references:message-id:in-reply-to:subject:cc:to:from :date:from:to:cc:subject:date:message-id:reply-to; bh=ooRbfvgflrDJY0S/ihm6kiwAA3KtmVPaxx1CoMlS90Q=; b=fQhXw5qOFrZ+qe5pfcTUSkuk7nURhFWkGQ6lxdt4sX/fzZmHedjnxQAI/8xRJu7ohs wEHBzC6oCYe+NvDRiDOCobj3OWG4JaHCzJhdI24oS/BeCOxbsI3bhQ/EWjRp6ogYgrjk fL3EDNEXsl/EEG67YYHz+embQVFNVuAqdfbsWI6+iqqWgHD9JiTadqSO58v5fjYuFuRu T892yINe2+ADCsL+kGn5GvLU6E7P7aRHp0gcu6/xMsr/QQdMGM3+Y8kIPWgWzfwYrQAF skMj7PQm/snd8HKnFlv16jsbbHV45BhdHJWoj9RWrcPsYrbVPC+mhgoTlnZ5evEkHXGl 7tig== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1696325061; x=1696929861; h=mime-version:references:message-id:in-reply-to:subject:cc:to:from :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=ooRbfvgflrDJY0S/ihm6kiwAA3KtmVPaxx1CoMlS90Q=; b=bKnEPGLK82mgCVkMZggfeiJt7toZ5b016ziUiWiCR13Cm6cnQKOuacXocC8HTh1Rj7 Z4EGILGL6Tf9El7zOb9Sf5WCBMcyu0gnqHKk6ZKG1cyNwBDgoiNPi515ULHSViP+iIl0 fIcWXh0aBjReUsb3EtWCCmHwcYtg4iHvFbtNmnQbAp+t9OJMCRJ7ZNF0+ObNGspeWyAO haMoLWHlzNUJcar2yJA/7e51jXvRxSpmdaKC0YJFcJBgOltBBdcdmmhQL+Nyk+Bxwsbm QCHhTodR6XwC5FKcWXPDht7OfCVzTpviYbEmccx9j2jHmKhKSDln5QbAbaNXTyEhPWze 19lQ== X-Gm-Message-State: AOJu0YzmKdkyQXTtGBI8CPTDOTYPSuW9K4y5WaHeyTEWd85q0pQVeR8g ZUn4T3uXW5js9jRUkyQyjIRI9A== X-Received: by 2002:a81:a08b:0:b0:58f:ae13:462b with SMTP id x133-20020a81a08b000000b0058fae13462bmr14655078ywg.4.1696325061140; Tue, 03 Oct 2023 02:24:21 -0700 (PDT) Received: from ripple.attlocal.net (172-10-233-147.lightspeed.sntcca.sbcglobal.net. 
[172.10.233.147]) by smtp.gmail.com with ESMTPSA id x9-20020a814a09000000b005463e45458bsm251441ywa.123.2023.10.03.02.24.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 03 Oct 2023 02:24:20 -0700 (PDT) Date: Tue, 3 Oct 2023 02:24:18 -0700 (PDT) From: Hugh Dickins X-X-Sender: hugh@ripple.attlocal.net To: Andrew Morton cc: Andi Kleen , Christoph Lameter , Matthew Wilcox , Mike Kravetz , David Hildenbrand , Suren Baghdasaryan , Yang Shi , Sidhartha Kumar , Vishal Moola , Kefeng Wang , Greg Kroah-Hartman , Tejun Heo , Mel Gorman , Michal Hocko , "Huang, Ying" , linux-kernel@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH v2 08/12] mempolicy: remove confusing MPOL_MF_LAZY dead code In-Reply-To: Message-ID: <80c9665c-1c3f-17ba-21a3-f6115cebf7d@google.com> References: MIME-Version: 1.0 X-Spam-Status: No, score=-8.4 required=5.0 tests=DKIMWL_WL_MED,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lipwig.vger.email Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (lipwig.vger.email [0.0.0.0]); Tue, 03 Oct 2023 02:24:35 -0700 (PDT) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1778725765411329689 X-GMAIL-MSGID: 1778725765411329689 v3.8 commit b24f53a0bea3 ("mm: mempolicy: Add MPOL_MF_LAZY") introduced MPOL_MF_LAZY, and included it in the MPOL_MF_VALID flags; but a720094ded8 ("mm: mempolicy: Hide MPOL_NOOP and MPOL_MF_LAZY from userspace for now") immediately removed it from MPOL_MF_VALID flags, pending further review. "This will need to be revisited", but it has not been reinstated. The present state is confusing: there is dead code in mm/mempolicy.c to handle MPOL_MF_LAZY cases which can never occur. Remove that: it can be resurrected later if necessary. But keep the definition of MPOL_MF_LAZY, which must remain in the UAPI, even though it always fails with EINVAL. https://lore.kernel.org/linux-mm/1553041659-46787-1-git-send-email-yang.shi@linux.alibaba.com/ links to a previous request to remove MPOL_MF_LAZY. 
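The userspace-visible effect is easy to demonstrate: since MPOL_MF_LAZY is outside MPOL_MF_VALID, mbind() rejects it before doing anything else. A hedged illustration (the 1<<3 value matches the UAPI definition shown in the diff below; error handling trimmed, link with -lnuma):

#include <errno.h>
#include <numaif.h>	/* mbind(), MPOL_BIND, MPOL_MF_MOVE */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 1 << 20;
	unsigned long nodemask = 0x1;	/* node 0 */
	void *addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	/* MPOL_MF_LAZY (1<<3 in the UAPI header) is not in MPOL_MF_VALID,
	 * so the call fails with EINVAL regardless of everything else */
	if (mbind(addr, len, MPOL_BIND, &nodemask, sizeof(nodemask) * 8,
		  MPOL_MF_MOVE | (1 << 3) /* MPOL_MF_LAZY */) == -1)
		printf("mbind failed: errno=%d (EINVAL=%d)\n", errno, EINVAL);
	return 0;
}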
Signed-off-by: Hugh Dickins Reviewed-by: Matthew Wilcox (Oracle) Reviewed-by: Yang Shi --- include/uapi/linux/mempolicy.h | 2 +- mm/mempolicy.c | 18 ------------------ 2 files changed, 1 insertion(+), 19 deletions(-) diff --git a/include/uapi/linux/mempolicy.h b/include/uapi/linux/mempolicy.h index 046d0ccba4cd..a8963f7ef4c2 100644 --- a/include/uapi/linux/mempolicy.h +++ b/include/uapi/linux/mempolicy.h @@ -48,7 +48,7 @@ enum { #define MPOL_MF_MOVE (1<<1) /* Move pages owned by this process to conform to policy */ #define MPOL_MF_MOVE_ALL (1<<2) /* Move every page to conform to policy */ -#define MPOL_MF_LAZY (1<<3) /* Modifies '_MOVE: lazy migrate on fault */ +#define MPOL_MF_LAZY (1<<3) /* UNSUPPORTED FLAG: Lazy migrate on fault */ #define MPOL_MF_INTERNAL (1<<4) /* Internal flags start here */ #define MPOL_MF_VALID (MPOL_MF_STRICT | \ diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 5d99fd5cd60b..f3224a8b0f6c 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -636,12 +636,6 @@ unsigned long change_prot_numa(struct vm_area_struct *vma, return nr_updated; } -#else -static unsigned long change_prot_numa(struct vm_area_struct *vma, - unsigned long addr, unsigned long end) -{ - return 0; -} #endif /* CONFIG_NUMA_BALANCING */ static int queue_pages_test_walk(unsigned long start, unsigned long end, @@ -680,14 +674,6 @@ static int queue_pages_test_walk(unsigned long start, unsigned long end, if (endvma > end) endvma = end; - if (flags & MPOL_MF_LAZY) { - /* Similar to task_numa_work, skip inaccessible VMAs */ - if (!is_vm_hugetlb_page(vma) && vma_is_accessible(vma) && - !(vma->vm_flags & VM_MIXEDMAP)) - change_prot_numa(vma, start, endvma); - return 1; - } - /* * Check page nodes, and queue pages to move, in the current vma. * But if no moving, and no strict checking, the scan can be skipped. 
@@ -1274,9 +1260,6 @@ static long do_mbind(unsigned long start, unsigned long len, if (IS_ERR(new)) return PTR_ERR(new); - if (flags & MPOL_MF_LAZY) - new->flags |= MPOL_F_MOF; - /* * If we are using the default policy then operation * on discontinuous address spaces is okay after all @@ -1321,7 +1304,6 @@ static long do_mbind(unsigned long start, unsigned long len, if (!err) { if (!list_empty(&pagelist)) { - WARN_ON_ONCE(flags & MPOL_MF_LAZY); nr_failed |= migrate_pages(&pagelist, new_folio, NULL, start, MIGRATE_SYNC, MR_MEMPOLICY_MBIND, NULL); } From patchwork Tue Oct 3 09:25:33 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hugh Dickins X-Patchwork-Id: 147786 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:612c:2a8e:b0:403:3b70:6f57 with SMTP id in14csp1958743vqb; Tue, 3 Oct 2023 02:26:28 -0700 (PDT) X-Google-Smtp-Source: AGHT+IHNhD92ekGUygswkbwBScekGbn3aJIe3cvZiUOjibLnic2xd5Wyd6E4qosMVUrjq3fc+uAu X-Received: by 2002:a05:6871:811:b0:1dc:723d:b8d0 with SMTP id q17-20020a056871081100b001dc723db8d0mr16713160oap.27.1696325188545; Tue, 03 Oct 2023 02:26:28 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1696325188; cv=none; d=google.com; s=arc-20160816; b=hXn3PtYSSOZAJxKNGMuOAz451KRKACaXA2whTb+JhDMKGrld0wDq7UirBFzKcktTth phqAZ+0kSfoaKixPD5qiLCFPM0eFhAUs6eymF6QQV9tQFMVFr8V9ZoloG2i4f7paVXUz UAsWQR6hSVlWilkk5jPwJdieG9uKiNQEJAuVEyoEyAuzbTMYCF6XTxsaqQXGBEGEjBdz B5QbYrSa0HhNSyLaVRC2hVIvt2xkcYvaT8tL5zY9/1HTtOX0cS1uKQCgsBxiwy5UR6In pCcE4gS3H8c4pidDeM4/aj8MBwt//Bc6YDnNVT27dHpnzV4noo3VfCEjmdUHgzeYNA+i chmA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:message-id:in-reply-to :subject:cc:to:from:date:dkim-signature; bh=nzmHY2Ka5cz6XnAsRZastGfanbfs6f5GjSGMfdpW808=; fh=uua5dLukz2E+eiUjHQ3XQZ9nrmHo6GVFaq4ywUulylI=; b=qwjME23w9boZfirhltPvYIwt2JD8dGR76by+AEH5FXRcu0TgCoquw6jPKZbnuxxPWO VIedWtA2VC5WVS4c4zkElCxQ2Tv1cpFwPHebNfgiUaQMiR4fk8VamyOaOtAItnrO4PnM GF8N9uoIBEYH0eT+5Y4lxhpEdF/wvUD9+TqyEOKtVDHq3JqffvOHhiM3GB2BBJsgqHUT fSGuoFBSUTSCW3QSfWlvRJh1qXP7rW8bUMEm4bLBzQKahNWbLJYlwtvzivyz88+N7Q1B P36vEXEJlEj2aXLKMnTx5Fc3GXODcKOHh/kybmzltvHGFzjL3sWPBZov1FGpCySNebT8 aj1A== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@google.com header.s=20230601 header.b=WXsA2qJJ; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.34 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: from howler.vger.email (howler.vger.email. 
[23.128.96.34]) by mx.google.com with ESMTPS id x4-20020a63db44000000b0056534e3aeeasi1073091pgi.474.2023.10.03.02.26.28 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 03 Oct 2023 02:26:28 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.34 as permitted sender) client-ip=23.128.96.34; Authentication-Results: mx.google.com; dkim=pass header.i=@google.com header.s=20230601 header.b=WXsA2qJJ; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.34 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: from out1.vger.email (depot.vger.email [IPv6:2620:137:e000::3:0]) by howler.vger.email (Postfix) with ESMTP id B0DC48085FBF; Tue, 3 Oct 2023 02:25:45 -0700 (PDT) X-Virus-Status: Clean X-Virus-Scanned: clamav-milter 0.103.10 at howler.vger.email Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231240AbjJCJZn (ORCPT + 18 others); Tue, 3 Oct 2023 05:25:43 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44280 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230434AbjJCJZk (ORCPT ); Tue, 3 Oct 2023 05:25:40 -0400 Received: from mail-yw1-x1135.google.com (mail-yw1-x1135.google.com [IPv6:2607:f8b0:4864:20::1135]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 67BC190 for ; Tue, 3 Oct 2023 02:25:37 -0700 (PDT) Received: by mail-yw1-x1135.google.com with SMTP id 00721157ae682-5a2379a8b69so8559957b3.2 for ; Tue, 03 Oct 2023 02:25:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1696325136; x=1696929936; darn=vger.kernel.org; h=mime-version:references:message-id:in-reply-to:subject:cc:to:from :date:from:to:cc:subject:date:message-id:reply-to; bh=nzmHY2Ka5cz6XnAsRZastGfanbfs6f5GjSGMfdpW808=; b=WXsA2qJJtXn2/tmENv4uwUdk8kLVDDTTuWfJM2B8fR3nZJ+m58iZSZxbZ/sHM3EopG 9quWmhrQ2bK1Xv1bB7bvkKqCHgO3vc3AgDXjq9AlP7X0YC6R1664KeURqOdrp6wdWaCW mVGpb8Xshp4psTbbNZyBqMGmY9oTVoMssaOhrfo8qcJckELjB3WoWlAxF3TWFxC7+OxF Ns+0gQaVGQeB1xNfoufGbG3LzdT834qI4uRDurJRBfx56coda41z53eaQjDwRuTfa6xW Zis5wRpMg75jTEm4kRVBoR2eBVRsAgxoIHayBpwsCPdYxxnyy76222EeYSRQxH6y+7PV Tlyg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1696325136; x=1696929936; h=mime-version:references:message-id:in-reply-to:subject:cc:to:from :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=nzmHY2Ka5cz6XnAsRZastGfanbfs6f5GjSGMfdpW808=; b=kRijiKOmJbgtVC79sBQ8BnPPfaNiJmaC8s1+0y1ng3ERZItwsHjslGF2acNdMiWb4M qW/WmMzMvRTCsX4RrS/m53MzbiRQ59lGROaGdrthDKJvOU5BugXsBXCpyhb+2X43vCht WfKM0zRp/IvCTpaYlyUoJgy4rwosApuTqCFrBU+Y0T9waa3s5Rg8z15JVhoHNGxIb26N 7JJg/7Ua3wTJ9uA3bx7oCBFXlmwSFRXmEc5BMrZWMiom0K3npTowBYIVExMCWPsQ6cW7 xLONAlEFi2uHessNCPT7xTMNMVzKLoIpo6MpSRln7T1eLxamoaRq0bRjNx+hRA6Wwsfo TiIg== X-Gm-Message-State: AOJu0Yy9wgLV6Mr093BhguBDv9TahB83voSsxRUNYYv8MHweTdDuG0DD 7Ez5c3i1rpwkYVKiY9s4mA6mJQ== X-Received: by 2002:a81:4f52:0:b0:599:da80:e1eb with SMTP id d79-20020a814f52000000b00599da80e1ebmr14242966ywb.24.1696325136488; Tue, 03 Oct 2023 02:25:36 -0700 (PDT) Received: from ripple.attlocal.net (172-10-233-147.lightspeed.sntcca.sbcglobal.net. 
[172.10.233.147]) by smtp.gmail.com with ESMTPSA id o19-20020a81de53000000b005925765aa30sm245713ywl.135.2023.10.03.02.25.34 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 03 Oct 2023 02:25:35 -0700 (PDT) Date: Tue, 3 Oct 2023 02:25:33 -0700 (PDT) From: Hugh Dickins X-X-Sender: hugh@ripple.attlocal.net To: Andrew Morton cc: Andi Kleen , Christoph Lameter , Matthew Wilcox , Mike Kravetz , David Hildenbrand , Suren Baghdasaryan , Yang Shi , Sidhartha Kumar , Vishal Moola , Kefeng Wang , Greg Kroah-Hartman , Tejun Heo , Mel Gorman , Michal Hocko , "Huang, Ying" , linux-kernel@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH v2 09/12] mm: add page_rmappable_folio() wrapper In-Reply-To: Message-ID: <8d92c6cf-eebe-748-e29c-c8ab224c741@google.com> References: MIME-Version: 1.0 X-Spam-Status: No, score=-17.6 required=5.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF, ENV_AND_HDR_SPF_MATCH,RCVD_IN_DNSWL_BLOCKED,SPF_HELO_NONE,SPF_PASS, USER_IN_DEF_DKIM_WL,USER_IN_DEF_SPF_WL autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (howler.vger.email [0.0.0.0]); Tue, 03 Oct 2023 02:25:45 -0700 (PDT) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1778725880916485343 X-GMAIL-MSGID: 1778725880916485343 folio_prep_large_rmappable() is being used repeatedly along with a conversion from page to folio, a check non-NULL, a check order > 1: wrap it all up into struct folio *page_rmappable_folio(struct page *). Signed-off-by: Hugh Dickins --- mm/internal.h | 9 +++++++++ mm/mempolicy.c | 17 +++-------------- mm/page_alloc.c | 8 ++------ 3 files changed, 14 insertions(+), 20 deletions(-) diff --git a/mm/internal.h b/mm/internal.h index d7916f1e9e98..b2b3716d1df6 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -415,6 +415,15 @@ static inline void folio_set_order(struct folio *folio, unsigned int order) void folio_undo_large_rmappable(struct folio *folio); +static inline struct folio *page_rmappable_folio(struct page *page) +{ + struct folio *folio = (struct folio *)page; + + if (folio && folio_order(folio) > 1) + folio_prep_large_rmappable(folio); + return folio; +} + static inline void prep_compound_head(struct page *page, unsigned int order) { struct folio *folio = (struct folio *)page; diff --git a/mm/mempolicy.c b/mm/mempolicy.c index f3224a8b0f6c..bfcc523a2860 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -2142,10 +2142,7 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma, mpol_cond_put(pol); gfp |= __GFP_COMP; page = alloc_page_interleave(gfp, order, nid); - folio = (struct folio *)page; - if (folio && order > 1) - folio_prep_large_rmappable(folio); - goto out; + return page_rmappable_folio(page); } if (pol->mode == MPOL_PREFERRED_MANY) { @@ -2155,10 +2152,7 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma, gfp |= __GFP_COMP; page = alloc_pages_preferred_many(gfp, order, node, pol); mpol_cond_put(pol); - folio = (struct folio *)page; - if (folio && order > 1) - folio_prep_large_rmappable(folio); - goto out; + return page_rmappable_folio(page); } if (unlikely(IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && hugepage)) { @@ -2252,12 +2246,7 @@ EXPORT_SYMBOL(alloc_pages); struct folio *folio_alloc(gfp_t gfp, unsigned order) { - struct page *page = alloc_pages(gfp | 
__GFP_COMP, order); - struct folio *folio = (struct folio *)page; - - if (folio && order > 1) - folio_prep_large_rmappable(folio); - return folio; + return page_rmappable_folio(alloc_pages(gfp | __GFP_COMP, order)); } EXPORT_SYMBOL(folio_alloc); diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 7df77b58a961..00f94dd88355 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -4619,12 +4619,8 @@ struct folio *__folio_alloc(gfp_t gfp, unsigned int order, int preferred_nid, nodemask_t *nodemask) { struct page *page = __alloc_pages(gfp | __GFP_COMP, order, - preferred_nid, nodemask); - struct folio *folio = (struct folio *)page; - - if (folio && order > 1) - folio_prep_large_rmappable(folio); - return folio; + preferred_nid, nodemask); + return page_rmappable_folio(page); } EXPORT_SYMBOL(__folio_alloc); From patchwork Tue Oct 3 09:26:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hugh Dickins X-Patchwork-Id: 147787 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:612c:2a8e:b0:403:3b70:6f57 with SMTP id in14csp1959115vqb; Tue, 3 Oct 2023 02:27:27 -0700 (PDT) X-Google-Smtp-Source: AGHT+IHNipzfPAaO8dB7QOkOJIA+cUkR0IqEMu2t7JT4mbLufRxhWG0fyUC86SJ/6lR9xebaZeuJ X-Received: by 2002:a05:6a21:182:b0:14c:5dc3:f1c9 with SMTP id le2-20020a056a21018200b0014c5dc3f1c9mr16055387pzb.49.1696325247123; Tue, 03 Oct 2023 02:27:27 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1696325247; cv=none; d=google.com; s=arc-20160816; b=FcnJdRN24ZjPYAPbvJfu7gptyl1XzEUQ0irerVabJyIC/Lc7cORZR5Mc0KMuW/wwnU jLWMkJWQ+aiP19/wvhhkDBFtBtK2bRukF2r75mbOoZiDPP0QmP3qMziaSXs6csbirRZh sHdMQozlMs+zVhw53XUU9lBYdBFC5y2vdwnzlY0hWXtVZoA3005n9YkU8ZUcVqsI7fxg 1na6XC/4ENW1LoJXpeiF2MfyOzQryJOvCPJ5yhM4Faxeq2vFtwUBfZj/yGoK5+3q0APt GNOoyLlHTQj+LKS3/OTBR8JauHg+0y9ufe2T852hxOEJj+E0zhcXZCHqIpfAW1Zu3iIu 3Ldw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:message-id:in-reply-to :subject:cc:to:from:date:dkim-signature; bh=3qvCe1A/3kG45e5iAe6KCdJKaMNhAMxBLRsWH7ioZ0U=; fh=uua5dLukz2E+eiUjHQ3XQZ9nrmHo6GVFaq4ywUulylI=; b=zjwUgMjLdn0yGzmUyJ0mNCE20Jjqvumbf04CCW52sxi25A/vwkuSd4O66JznmdWErH HOndNdwdYtdkl46Ulde8Dd2fHovWnPPrOAuNo5jGPSGeSFK12UASXNyfk5/K5Vqjydmd ylppl+dTve2lpMva73hzHLkuanJNzSQwKYTDZeLRgpXIRVtF6n0ajM+ODKw2BQi+4jsn 7n7HeONPRc2h7TXMFy8fwSM2EOyXvtze9ORlFQmrIFmONrqvKM4NL6SyVML5VCboZNyn MWfGmR1Nh+mPqXOehj1/Yn7XaeNbIDAxDM/fPUKX0aWoWN8WhAGt+VqRqmAqNr7uuxwV TYLg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@google.com header.s=20230601 header.b=Cb5yjeJD; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::3:3 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: from lipwig.vger.email (lipwig.vger.email. 
[172.10.233.147]) by smtp.gmail.com with ESMTPSA id h4-20020a0df704000000b0058fc7604f45sm252385ywf.130.2023.10.03.02.26.43 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 03 Oct 2023 02:26:44 -0700 (PDT) Date: Tue, 3 Oct 2023 02:26:42 -0700 (PDT) From: Hugh Dickins X-X-Sender: hugh@ripple.attlocal.net To: Andrew Morton cc: Andi Kleen , Christoph Lameter , Matthew Wilcox , Mike Kravetz , David Hildenbrand , Suren Baghdasaryan , Yang Shi , Sidhartha Kumar , Vishal Moola , Kefeng Wang , Greg Kroah-Hartman , Tejun Heo , Mel Gorman , Michal Hocko , "Huang, Ying" , linux-kernel@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH v2 10/12] mempolicy: alloc_pages_mpol() for NUMA policy without vma In-Reply-To: Message-ID: <74e34633-6060-f5e3-aee-7040d43f2e93@google.com> References: MIME-Version: 1.0 X-Spam-Status: No, score=-8.4 required=5.0 tests=DKIMWL_WL_MED,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lipwig.vger.email Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (lipwig.vger.email [0.0.0.0]); Tue, 03 Oct 2023 02:27:22 -0700 (PDT) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1778725942455664660 X-GMAIL-MSGID: 1778725942455664660 Shrink shmem's stack usage by eliminating the pseudo-vma from its folio allocation. alloc_pages_mpol(gfp, order, pol, ilx, nid) becomes the principal actor for passing mempolicy choice down to __alloc_pages(), rather than vma_alloc_folio(gfp, order, vma, addr, hugepage). vma_alloc_folio() and alloc_pages() remain, but as wrappers around alloc_pages_mpol(). alloc_pages_bulk_*() untouched, except to provide the additional args to policy_nodemask(), which subsumes policy_node(). Cleanup throughout, cutting out some unhelpful "helpers". It would all be much simpler without MPOL_INTERLEAVE, but that adds a dynamic to the constant mpol: complicated by v3.6 commit 09c231cb8bfd ("tmpfs: distribute interleave better across nodes"), which added ino bias to the interleave, hidden from mm/mempolicy.c until this commit. Hence "ilx" throughout, the "interleave index". Originally I thought it could be done just with nid, but that's wrong: the nodemask may come from the shared policy layer below a shmem vma, or it may come from the task layer above a shmem vma; and without the final nodemask then nodeid cannot be decided. And how ilx is applied depends also on page order. The interleave index is almost always irrelevant unless MPOL_INTERLEAVE: with one exception in alloc_pages_mpol(), where the NO_INTERLEAVE_INDEX passed down from vma-less alloc_pages() is also used as hint not to use THP-style hugepage allocation - to avoid the overhead of a hugepage arg (though I don't understand why we never just added a GFP bit for THP - if it actually needs a different allocation strategy from other pages of the same order). vma_alloc_folio() still carries its hugepage arg here, but it is not used, and should be removed when agreed. get_vma_policy() no longer allows a NULL vma: over time I believe we've eradicated all the places which used to need it e.g. swapoff and madvise used to pass NULL vma to read_swap_cache_async(), but now know the vma. 
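To make the new calling convention concrete before the diff: a vma-based allocation site now resolves the mempolicy and interleave index once, then hands both to alloc_pages_mpol(). The following is only a condensed sketch mirroring the reworked vma_alloc_folio() in the diff below (hugepage handling and error paths omitted; the "sketch_" name is hypothetical), not a literal copy of the patch:

	/*
	 * Illustrative sketch only: mirrors the shape of the reworked
	 * vma_alloc_folio() below, with hugepage handling and error
	 * paths omitted.
	 */
	static struct folio *sketch_vma_alloc_folio(gfp_t gfp, int order,
			struct vm_area_struct *vma, unsigned long addr)
	{
		struct mempolicy *pol;
		pgoff_t ilx;	/* interleave index: only meaningful for MPOL_INTERLEAVE */
		struct page *page;

		pol = get_vma_policy(vma, addr, order, &ilx);
		page = alloc_pages_mpol(gfp | __GFP_COMP, order,
					pol, ilx, numa_node_id());
		mpol_cond_put(pol);	/* drop the ref taken on a shared policy */
		return page_rmappable_folio(page);
	}

The same pair of calls replaces shmem's pseudo-vma dance in the diff below: shmem_get_pgoff_policy() supplies the mempolicy and ilx for an inode offset, and alloc_pages_mpol() does the rest.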
Signed-off-by: Hugh Dickins --- fs/proc/task_mmu.c | 5 +- include/linux/gfp.h | 10 +- include/linux/mempolicy.h | 13 +- include/linux/mm.h | 2 +- ipc/shm.c | 21 +-- mm/mempolicy.c | 383 +++++++++++++++++----------------------- mm/shmem.c | 92 +++++----- mm/swap.h | 9 +- mm/swap_state.c | 83 +++++---- 9 files changed, 297 insertions(+), 321 deletions(-) diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c index 5a302f1b8d68..6cc4720eabf0 100644 --- a/fs/proc/task_mmu.c +++ b/fs/proc/task_mmu.c @@ -2637,8 +2637,9 @@ static int show_numa_map(struct seq_file *m, void *v) struct numa_maps *md = &numa_priv->md; struct file *file = vma->vm_file; struct mm_struct *mm = vma->vm_mm; - struct mempolicy *pol; char buffer[64]; + struct mempolicy *pol; + pgoff_t ilx; int nid; if (!mm) @@ -2647,7 +2648,7 @@ static int show_numa_map(struct seq_file *m, void *v) /* Ensure we start with an empty set of numa_maps statistics. */ memset(md, 0, sizeof(*md)); - pol = __get_vma_policy(vma, vma->vm_start); + pol = __get_vma_policy(vma, vma->vm_start, &ilx); if (pol) { mpol_to_str(buffer, sizeof(buffer), pol); mpol_cond_put(pol); diff --git a/include/linux/gfp.h b/include/linux/gfp.h index 5b917e5b9350..de292a007138 100644 --- a/include/linux/gfp.h +++ b/include/linux/gfp.h @@ -8,6 +8,7 @@ #include struct vm_area_struct; +struct mempolicy; /* Convert GFP flags to their corresponding migrate type */ #define GFP_MOVABLE_MASK (__GFP_RECLAIMABLE|__GFP_MOVABLE) @@ -262,7 +263,9 @@ static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask, #ifdef CONFIG_NUMA struct page *alloc_pages(gfp_t gfp, unsigned int order); -struct folio *folio_alloc(gfp_t gfp, unsigned order); +struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order, + struct mempolicy *mpol, pgoff_t ilx, int nid); +struct folio *folio_alloc(gfp_t gfp, unsigned int order); struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma, unsigned long addr, bool hugepage); #else @@ -270,6 +273,11 @@ static inline struct page *alloc_pages(gfp_t gfp_mask, unsigned int order) { return alloc_pages_node(numa_node_id(), gfp_mask, order); } +static inline struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order, + struct mempolicy *mpol, pgoff_t ilx, int nid) +{ + return alloc_pages(gfp, order); +} static inline struct folio *folio_alloc(gfp_t gfp, unsigned int order) { return __folio_alloc_node(gfp, order, numa_node_id()); diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h index c69f9480d5e4..3c208d4f0ee9 100644 --- a/include/linux/mempolicy.h +++ b/include/linux/mempolicy.h @@ -128,7 +128,9 @@ struct mempolicy *mpol_shared_policy_lookup(struct shared_policy *sp, struct mempolicy *get_task_policy(struct task_struct *p); struct mempolicy *__get_vma_policy(struct vm_area_struct *vma, - unsigned long addr); + unsigned long addr, pgoff_t *ilx); +struct mempolicy *get_vma_policy(struct vm_area_struct *vma, + unsigned long addr, int order, pgoff_t *ilx); bool vma_policy_mof(struct vm_area_struct *vma); extern void numa_default_policy(void); @@ -142,8 +144,6 @@ extern int huge_node(struct vm_area_struct *vma, extern bool init_nodemask_of_mempolicy(nodemask_t *mask); extern bool mempolicy_in_oom_domain(struct task_struct *tsk, const nodemask_t *mask); -extern nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy); - extern unsigned int mempolicy_slab_node(void); extern enum zone_type policy_zone; @@ -215,6 +215,13 @@ mpol_shared_policy_lookup(struct shared_policy *sp, pgoff_t idx) return NULL; } +static inline struct 
mempolicy *get_vma_policy(struct vm_area_struct *vma, + unsigned long addr, int order, pgoff_t *ilx) +{ + *ilx = 0; + return NULL; +} + #define vma_policy(vma) NULL static inline int diff --git a/include/linux/mm.h b/include/linux/mm.h index 52c40b3d0813..9b86a9c35427 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -619,7 +619,7 @@ struct vm_operations_struct { * policy. */ struct mempolicy *(*get_policy)(struct vm_area_struct *vma, - unsigned long addr); + unsigned long addr, pgoff_t *ilx); #endif /* * Called by vm_normal_page() for special PTEs to find the diff --git a/ipc/shm.c b/ipc/shm.c index 576a543b7cff..222aaf035afb 100644 --- a/ipc/shm.c +++ b/ipc/shm.c @@ -562,30 +562,25 @@ static unsigned long shm_pagesize(struct vm_area_struct *vma) } #ifdef CONFIG_NUMA -static int shm_set_policy(struct vm_area_struct *vma, struct mempolicy *new) +static int shm_set_policy(struct vm_area_struct *vma, struct mempolicy *mpol) { - struct file *file = vma->vm_file; - struct shm_file_data *sfd = shm_file_data(file); + struct shm_file_data *sfd = shm_file_data(vma->vm_file); int err = 0; if (sfd->vm_ops->set_policy) - err = sfd->vm_ops->set_policy(vma, new); + err = sfd->vm_ops->set_policy(vma, mpol); return err; } static struct mempolicy *shm_get_policy(struct vm_area_struct *vma, - unsigned long addr) + unsigned long addr, pgoff_t *ilx) { - struct file *file = vma->vm_file; - struct shm_file_data *sfd = shm_file_data(file); - struct mempolicy *pol = NULL; + struct shm_file_data *sfd = shm_file_data(vma->vm_file); + struct mempolicy *mpol = vma->vm_policy; if (sfd->vm_ops->get_policy) - pol = sfd->vm_ops->get_policy(vma, addr); - else if (vma->vm_policy) - pol = vma->vm_policy; - - return pol; + mpol = sfd->vm_ops->get_policy(vma, addr, ilx); + return mpol; } #endif diff --git a/mm/mempolicy.c b/mm/mempolicy.c index bfcc523a2860..8cf76de12acd 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -114,6 +114,8 @@ #define MPOL_MF_INVERT (MPOL_MF_INTERNAL << 1) /* Invert check for nodemask */ #define MPOL_MF_WRLOCK (MPOL_MF_INTERNAL << 2) /* Write-lock walked vmas */ +#define NO_INTERLEAVE_INDEX (-1UL) + static struct kmem_cache *policy_cache; static struct kmem_cache *sn_cache; @@ -918,6 +920,7 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask, } if (flags & MPOL_F_ADDR) { + pgoff_t ilx; /* ignored here */ /* * Do NOT fall back to task policy if the * vma/shared policy at addr is NULL. We @@ -929,10 +932,7 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask, mmap_read_unlock(mm); return -EFAULT; } - if (vma->vm_ops && vma->vm_ops->get_policy) - pol = vma->vm_ops->get_policy(vma, addr); - else - pol = vma->vm_policy; + pol = __get_vma_policy(vma, addr, &ilx); } else if (addr) return -EINVAL; @@ -1190,6 +1190,15 @@ static struct folio *new_folio(struct folio *src, unsigned long start) break; } + /* + * __get_vma_policy() now expects a genuine non-NULL vma. Return NULL + * when the page can no longer be located in a vma: that is not ideal + * (migrate_pages() will give up early, presuming ENOMEM), but good + * enough to avoid a crash by syzkaller or concurrent holepunch. 
+ */ + if (!vma) + return NULL; + if (folio_test_hugetlb(src)) { return alloc_hugetlb_folio_vma(folio_hstate(src), vma, address); @@ -1198,9 +1207,6 @@ static struct folio *new_folio(struct folio *src, unsigned long start) if (folio_test_large(src)) gfp = GFP_TRANSHUGE; - /* - * if !vma, vma_alloc_folio() will use task or system default policy - */ return vma_alloc_folio(gfp, folio_order(src), vma, address, folio_test_large(src)); } @@ -1710,34 +1716,19 @@ bool vma_migratable(struct vm_area_struct *vma) } struct mempolicy *__get_vma_policy(struct vm_area_struct *vma, - unsigned long addr) + unsigned long addr, pgoff_t *ilx) { - struct mempolicy *pol = NULL; - - if (vma) { - if (vma->vm_ops && vma->vm_ops->get_policy) { - pol = vma->vm_ops->get_policy(vma, addr); - } else if (vma->vm_policy) { - pol = vma->vm_policy; - - /* - * shmem_alloc_page() passes MPOL_F_SHARED policy with - * a pseudo vma whose vma->vm_ops=NULL. Take a reference - * count on these policies which will be dropped by - * mpol_cond_put() later - */ - if (mpol_needs_cond_ref(pol)) - mpol_get(pol); - } - } - - return pol; + *ilx = 0; + return (vma->vm_ops && vma->vm_ops->get_policy) ? + vma->vm_ops->get_policy(vma, addr, ilx) : vma->vm_policy; } /* - * get_vma_policy(@vma, @addr) + * get_vma_policy(@vma, @addr, @order, @ilx) * @vma: virtual memory area whose policy is sought * @addr: address in @vma for shared policy lookup + * @order: 0, or appropriate huge_page_order for interleaving + * @ilx: interleave index (output), for use only when MPOL_INTERLEAVE * * Returns effective policy for a VMA at specified address. * Falls back to current->mempolicy or system default policy, as necessary. @@ -1746,14 +1737,18 @@ struct mempolicy *__get_vma_policy(struct vm_area_struct *vma, * freeing by another task. It is the caller's responsibility to free the * extra reference for shared policies. */ -static struct mempolicy *get_vma_policy(struct vm_area_struct *vma, - unsigned long addr) +struct mempolicy *get_vma_policy(struct vm_area_struct *vma, + unsigned long addr, int order, pgoff_t *ilx) { - struct mempolicy *pol = __get_vma_policy(vma, addr); + struct mempolicy *pol; + pol = __get_vma_policy(vma, addr, ilx); if (!pol) pol = get_task_policy(current); - + if (pol->mode == MPOL_INTERLEAVE) { + *ilx += vma->vm_pgoff >> order; + *ilx += (addr - vma->vm_start) >> (PAGE_SHIFT + order); + } return pol; } @@ -1763,8 +1758,9 @@ bool vma_policy_mof(struct vm_area_struct *vma) if (vma->vm_ops && vma->vm_ops->get_policy) { bool ret = false; + pgoff_t ilx; /* ignored here */ - pol = vma->vm_ops->get_policy(vma, vma->vm_start); + pol = vma->vm_ops->get_policy(vma, vma->vm_start, &ilx); if (pol && (pol->flags & MPOL_F_MOF)) ret = true; mpol_cond_put(pol); @@ -1799,54 +1795,6 @@ bool apply_policy_zone(struct mempolicy *policy, enum zone_type zone) return zone >= dynamic_policy_zone; } -/* - * Return a nodemask representing a mempolicy for filtering nodes for - * page allocation - */ -nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy) -{ - int mode = policy->mode; - - /* Lower zones don't get a nodemask applied for MPOL_BIND */ - if (unlikely(mode == MPOL_BIND) && - apply_policy_zone(policy, gfp_zone(gfp)) && - cpuset_nodemask_valid_mems_allowed(&policy->nodes)) - return &policy->nodes; - - if (mode == MPOL_PREFERRED_MANY) - return &policy->nodes; - - return NULL; -} - -/* - * Return the preferred node id for 'prefer' mempolicy, and return - * the given id for all other policies. 
- * - * policy_node() is always coupled with policy_nodemask(), which - * secures the nodemask limit for 'bind' and 'prefer-many' policy. - */ -static int policy_node(gfp_t gfp, struct mempolicy *policy, int nid) -{ - if (policy->mode == MPOL_PREFERRED) { - nid = first_node(policy->nodes); - } else { - /* - * __GFP_THISNODE shouldn't even be used with the bind policy - * because we might easily break the expectation to stay on the - * requested node and not break the policy. - */ - WARN_ON_ONCE(policy->mode == MPOL_BIND && (gfp & __GFP_THISNODE)); - } - - if ((policy->mode == MPOL_BIND || - policy->mode == MPOL_PREFERRED_MANY) && - policy->home_node != NUMA_NO_NODE) - return policy->home_node; - - return nid; -} - /* Do dynamic interleaving for a process */ static unsigned int interleave_nodes(struct mempolicy *policy) { @@ -1906,11 +1854,11 @@ unsigned int mempolicy_slab_node(void) } /* - * Do static interleaving for a VMA with known offset @n. Returns the n'th - * node in pol->nodes (starting from n=0), wrapping around if n exceeds the - * number of present nodes. + * Do static interleaving for interleave index @ilx. Returns the ilx'th + * node in pol->nodes (starting from ilx=0), wrapping around if ilx + * exceeds the number of present nodes. */ -static unsigned offset_il_node(struct mempolicy *pol, unsigned long n) +static unsigned int interleave_nid(struct mempolicy *pol, pgoff_t ilx) { nodemask_t nodemask = pol->nodes; unsigned int target, nnodes; @@ -1928,33 +1876,54 @@ static unsigned offset_il_node(struct mempolicy *pol, unsigned long n) nnodes = nodes_weight(nodemask); if (!nnodes) return numa_node_id(); - target = (unsigned int)n % nnodes; + target = ilx % nnodes; nid = first_node(nodemask); for (i = 0; i < target; i++) nid = next_node(nid, nodemask); return nid; } -/* Determine a node number for interleave */ -static inline unsigned interleave_nid(struct mempolicy *pol, - struct vm_area_struct *vma, unsigned long addr, int shift) +/* + * Return a nodemask representing a mempolicy for filtering nodes for + * page allocation, together with preferred node id (or the input node id). + */ +static nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *pol, + pgoff_t ilx, int *nid) { - if (vma) { - unsigned long off; + nodemask_t *nodemask = NULL; + switch (pol->mode) { + case MPOL_PREFERRED: + /* Override input node id */ + *nid = first_node(pol->nodes); + break; + case MPOL_PREFERRED_MANY: + nodemask = &pol->nodes; + if (pol->home_node != NUMA_NO_NODE) + *nid = pol->home_node; + break; + case MPOL_BIND: + /* Restrict to nodemask (but not on lower zones) */ + if (apply_policy_zone(pol, gfp_zone(gfp)) && + cpuset_nodemask_valid_mems_allowed(&pol->nodes)) + nodemask = &pol->nodes; + if (pol->home_node != NUMA_NO_NODE) + *nid = pol->home_node; /* - * for small pages, there is no difference between - * shift and PAGE_SHIFT, so the bit-shift is safe. - * for huge pages, since vm_pgoff is in units of small - * pages, we need to shift off the always 0 bits to get - * a useful offset. + * __GFP_THISNODE shouldn't even be used with the bind policy + * because we might easily break the expectation to stay on the + * requested node and not break the policy. */ - BUG_ON(shift < PAGE_SHIFT); - off = vma->vm_pgoff >> (shift - PAGE_SHIFT); - off += (addr - vma->vm_start) >> shift; - return offset_il_node(pol, off); - } else - return interleave_nodes(pol); + WARN_ON_ONCE(gfp & __GFP_THISNODE); + break; + case MPOL_INTERLEAVE: + /* Override input node id */ + *nid = (ilx == NO_INTERLEAVE_INDEX) ? 
+ interleave_nodes(pol) : interleave_nid(pol, ilx); + break; + } + + return nodemask; } #ifdef CONFIG_HUGETLBFS @@ -1970,27 +1939,16 @@ static inline unsigned interleave_nid(struct mempolicy *pol, * to the struct mempolicy for conditional unref after allocation. * If the effective policy is 'bind' or 'prefer-many', returns a pointer * to the mempolicy's @nodemask for filtering the zonelist. - * - * Must be protected by read_mems_allowed_begin() */ int huge_node(struct vm_area_struct *vma, unsigned long addr, gfp_t gfp_flags, - struct mempolicy **mpol, nodemask_t **nodemask) + struct mempolicy **mpol, nodemask_t **nodemask) { + pgoff_t ilx; int nid; - int mode; - *mpol = get_vma_policy(vma, addr); - *nodemask = NULL; - mode = (*mpol)->mode; - - if (unlikely(mode == MPOL_INTERLEAVE)) { - nid = interleave_nid(*mpol, vma, addr, - huge_page_shift(hstate_vma(vma))); - } else { - nid = policy_node(gfp_flags, *mpol, numa_node_id()); - if (mode == MPOL_BIND || mode == MPOL_PREFERRED_MANY) - *nodemask = &(*mpol)->nodes; - } + nid = numa_node_id(); + *mpol = get_vma_policy(vma, addr, hstate_vma(vma)->order, &ilx); + *nodemask = policy_nodemask(gfp_flags, *mpol, ilx, &nid); return nid; } @@ -2068,27 +2026,8 @@ bool mempolicy_in_oom_domain(struct task_struct *tsk, return ret; } -/* Allocate a page in interleaved policy. - Own path because it needs to do special accounting. */ -static struct page *alloc_page_interleave(gfp_t gfp, unsigned order, - unsigned nid) -{ - struct page *page; - - page = __alloc_pages(gfp, order, nid, NULL); - /* skip NUMA_INTERLEAVE_HIT counter update if numa stats is disabled */ - if (!static_branch_likely(&vm_numa_stat_key)) - return page; - if (page && page_to_nid(page) == nid) { - preempt_disable(); - __count_numa_event(page_zone(page), NUMA_INTERLEAVE_HIT); - preempt_enable(); - } - return page; -} - static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order, - int nid, struct mempolicy *pol) + int nid, nodemask_t *nodemask) { struct page *page; gfp_t preferred_gfp; @@ -2101,7 +2040,7 @@ static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order, */ preferred_gfp = gfp | __GFP_NOWARN; preferred_gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL); - page = __alloc_pages(preferred_gfp, order, nid, &pol->nodes); + page = __alloc_pages(preferred_gfp, order, nid, nodemask); if (!page) page = __alloc_pages(gfp, order, nid, NULL); @@ -2109,55 +2048,29 @@ static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order, } /** - * vma_alloc_folio - Allocate a folio for a VMA. + * alloc_pages_mpol - Allocate pages according to NUMA mempolicy. * @gfp: GFP flags. - * @order: Order of the folio. - * @vma: Pointer to VMA or NULL if not available. - * @addr: Virtual address of the allocation. Must be inside @vma. - * @hugepage: For hugepages try only the preferred node if possible. + * @order: Order of the page allocation. + * @pol: Pointer to the NUMA mempolicy. + * @ilx: Index for interleave mempolicy (also distinguishes alloc_pages()). + * @nid: Preferred node (usually numa_node_id() but @mpol may override it). * - * Allocate a folio for a specific address in @vma, using the appropriate - * NUMA policy. When @vma is not NULL the caller must hold the mmap_lock - * of the mm_struct of the VMA to prevent it from going away. Should be - * used for all allocations for folios that will be mapped into user space. - * - * Return: The folio on success or NULL if allocation fails. + * Return: The page on success or NULL if allocation fails. 
*/ -struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma, - unsigned long addr, bool hugepage) +struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order, + struct mempolicy *pol, pgoff_t ilx, int nid) { - struct mempolicy *pol; - int node = numa_node_id(); - struct folio *folio; - int preferred_nid; - nodemask_t *nmask; + nodemask_t *nodemask; + struct page *page; - pol = get_vma_policy(vma, addr); + nodemask = policy_nodemask(gfp, pol, ilx, &nid); - if (pol->mode == MPOL_INTERLEAVE) { - struct page *page; - unsigned nid; - - nid = interleave_nid(pol, vma, addr, PAGE_SHIFT + order); - mpol_cond_put(pol); - gfp |= __GFP_COMP; - page = alloc_page_interleave(gfp, order, nid); - return page_rmappable_folio(page); - } - - if (pol->mode == MPOL_PREFERRED_MANY) { - struct page *page; - - node = policy_node(gfp, pol, node); - gfp |= __GFP_COMP; - page = alloc_pages_preferred_many(gfp, order, node, pol); - mpol_cond_put(pol); - return page_rmappable_folio(page); - } - - if (unlikely(IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && hugepage)) { - int hpage_node = node; + if (pol->mode == MPOL_PREFERRED_MANY) + return alloc_pages_preferred_many(gfp, order, nid, nodemask); + if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && + /* filter "hugepage" allocation, unless from alloc_pages() */ + order == HPAGE_PMD_ORDER && ilx != NO_INTERLEAVE_INDEX) { /* * For hugepage allocation and non-interleave policy which * allows the current node (or other explicitly preferred @@ -2168,39 +2081,68 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma, * If the policy is interleave or does not allow the current * node in its nodemask, we allocate the standard way. */ - if (pol->mode == MPOL_PREFERRED) - hpage_node = first_node(pol->nodes); - - nmask = policy_nodemask(gfp, pol); - if (!nmask || node_isset(hpage_node, *nmask)) { - mpol_cond_put(pol); + if (pol->mode != MPOL_INTERLEAVE && + (!nodemask || node_isset(nid, *nodemask))) { /* * First, try to allocate THP only on local node, but * don't reclaim unnecessarily, just compact. */ - folio = __folio_alloc_node(gfp | __GFP_THISNODE | - __GFP_NORETRY, order, hpage_node); - + page = __alloc_pages_node(nid, + gfp | __GFP_THISNODE | __GFP_NORETRY, order); + if (page || !(gfp & __GFP_DIRECT_RECLAIM)) + return page; /* * If hugepage allocations are configured to always * synchronous compact or the vma has been madvised * to prefer hugepage backing, retry allowing remote * memory with both reclaim and compact as well. */ - if (!folio && (gfp & __GFP_DIRECT_RECLAIM)) - folio = __folio_alloc(gfp, order, hpage_node, - nmask); - - goto out; } } - nmask = policy_nodemask(gfp, pol); - preferred_nid = policy_node(gfp, pol, node); - folio = __folio_alloc(gfp, order, preferred_nid, nmask); + page = __alloc_pages(gfp, order, nid, nodemask); + + if (unlikely(pol->mode == MPOL_INTERLEAVE) && page) { + /* skip NUMA_INTERLEAVE_HIT update if numa stats is disabled */ + if (static_branch_likely(&vm_numa_stat_key) && + page_to_nid(page) == nid) { + preempt_disable(); + __count_numa_event(page_zone(page), NUMA_INTERLEAVE_HIT); + preempt_enable(); + } + } + + return page; +} + +/** + * vma_alloc_folio - Allocate a folio for a VMA. + * @gfp: GFP flags. + * @order: Order of the folio. + * @vma: Pointer to VMA. + * @addr: Virtual address of the allocation. Must be inside @vma. + * @hugepage: Unused (was: For hugepages try only preferred node if possible). 
+ * + * Allocate a folio for a specific address in @vma, using the appropriate + * NUMA policy. The caller must hold the mmap_lock of the mm_struct of the + * VMA to prevent it from going away. Should be used for all allocations + * for folios that will be mapped into user space, excepting hugetlbfs, and + * excepting where direct use of alloc_pages_mpol() is more appropriate. + * + * Return: The folio on success or NULL if allocation fails. + */ +struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma, + unsigned long addr, bool hugepage) +{ + struct mempolicy *pol; + pgoff_t ilx; + struct page *page; + + pol = get_vma_policy(vma, addr, order, &ilx); + page = alloc_pages_mpol(gfp | __GFP_COMP, order, + pol, ilx, numa_node_id()); mpol_cond_put(pol); -out: - return folio; + return page_rmappable_folio(page); } EXPORT_SYMBOL(vma_alloc_folio); @@ -2218,33 +2160,23 @@ EXPORT_SYMBOL(vma_alloc_folio); * flags are used. * Return: The page on success or NULL if allocation fails. */ -struct page *alloc_pages(gfp_t gfp, unsigned order) +struct page *alloc_pages(gfp_t gfp, unsigned int order) { struct mempolicy *pol = &default_policy; - struct page *page; - - if (!in_interrupt() && !(gfp & __GFP_THISNODE)) - pol = get_task_policy(current); /* * No reference counting needed for current->mempolicy * nor system default_policy */ - if (pol->mode == MPOL_INTERLEAVE) - page = alloc_page_interleave(gfp, order, interleave_nodes(pol)); - else if (pol->mode == MPOL_PREFERRED_MANY) - page = alloc_pages_preferred_many(gfp, order, - policy_node(gfp, pol, numa_node_id()), pol); - else - page = __alloc_pages(gfp, order, - policy_node(gfp, pol, numa_node_id()), - policy_nodemask(gfp, pol)); + if (!in_interrupt() && !(gfp & __GFP_THISNODE)) + pol = get_task_policy(current); - return page; + return alloc_pages_mpol(gfp, order, + pol, NO_INTERLEAVE_INDEX, numa_node_id()); } EXPORT_SYMBOL(alloc_pages); -struct folio *folio_alloc(gfp_t gfp, unsigned order) +struct folio *folio_alloc(gfp_t gfp, unsigned int order) { return page_rmappable_folio(alloc_pages(gfp | __GFP_COMP, order)); } @@ -2315,6 +2247,8 @@ unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp, unsigned long nr_pages, struct page **page_array) { struct mempolicy *pol = &default_policy; + nodemask_t *nodemask; + int nid; if (!in_interrupt() && !(gfp & __GFP_THISNODE)) pol = get_task_policy(current); @@ -2327,9 +2261,10 @@ unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp, return alloc_pages_bulk_array_preferred_many(gfp, numa_node_id(), pol, nr_pages, page_array); - return __alloc_pages_bulk(gfp, policy_node(gfp, pol, numa_node_id()), - policy_nodemask(gfp, pol), nr_pages, NULL, - page_array); + nid = numa_node_id(); + nodemask = policy_nodemask(gfp, pol, NO_INTERLEAVE_INDEX, &nid); + return __alloc_pages_bulk(gfp, nid, nodemask, + nr_pages, NULL, page_array); } int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst) @@ -2516,23 +2451,21 @@ int mpol_misplaced(struct folio *folio, struct vm_area_struct *vma, unsigned long addr) { struct mempolicy *pol; + pgoff_t ilx; struct zoneref *z; int curnid = folio_nid(folio); - unsigned long pgoff; int thiscpu = raw_smp_processor_id(); int thisnid = cpu_to_node(thiscpu); int polnid = NUMA_NO_NODE; int ret = NUMA_NO_NODE; - pol = get_vma_policy(vma, addr); + pol = get_vma_policy(vma, addr, folio_order(folio), &ilx); if (!(pol->flags & MPOL_F_MOF)) goto out; switch (pol->mode) { case MPOL_INTERLEAVE: - pgoff = vma->vm_pgoff; - pgoff += (addr - vma->vm_start) >> 
PAGE_SHIFT; - polnid = offset_il_node(pol, pgoff); + polnid = interleave_nid(pol, ilx); break; case MPOL_PREFERRED: diff --git a/mm/shmem.c b/mm/shmem.c index a3ec5d2dda9a..6503910b0f54 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -1561,38 +1561,20 @@ static inline struct mempolicy *shmem_get_sbmpol(struct shmem_sb_info *sbinfo) return NULL; } #endif /* CONFIG_NUMA && CONFIG_TMPFS */ -#ifndef CONFIG_NUMA -#define vm_policy vm_private_data -#endif -static void shmem_pseudo_vma_init(struct vm_area_struct *vma, - struct shmem_inode_info *info, pgoff_t index) -{ - /* Create a pseudo vma that just contains the policy */ - vma_init(vma, NULL); - /* Bias interleave by inode number to distribute better across nodes */ - vma->vm_pgoff = index + info->vfs_inode.i_ino; - vma->vm_policy = mpol_shared_policy_lookup(&info->policy, index); -} +static struct mempolicy *shmem_get_pgoff_policy(struct shmem_inode_info *info, + pgoff_t index, unsigned int order, pgoff_t *ilx); -static void shmem_pseudo_vma_destroy(struct vm_area_struct *vma) -{ - /* Drop reference taken by mpol_shared_policy_lookup() */ - mpol_cond_put(vma->vm_policy); -} - -static struct folio *shmem_swapin(swp_entry_t swap, gfp_t gfp, +static struct folio *shmem_swapin_cluster(swp_entry_t swap, gfp_t gfp, struct shmem_inode_info *info, pgoff_t index) { - struct vm_area_struct pvma; + struct mempolicy *mpol; + pgoff_t ilx; struct page *page; - struct vm_fault vmf = { - .vma = &pvma, - }; - shmem_pseudo_vma_init(&pvma, info, index); - page = swap_cluster_readahead(swap, gfp, &vmf); - shmem_pseudo_vma_destroy(&pvma); + mpol = shmem_get_pgoff_policy(info, index, 0, &ilx); + page = swap_cluster_readahead(swap, gfp, mpol, ilx); + mpol_cond_put(mpol); if (!page) return NULL; @@ -1626,27 +1608,29 @@ static gfp_t limit_gfp_mask(gfp_t huge_gfp, gfp_t limit_gfp) static struct folio *shmem_alloc_hugefolio(gfp_t gfp, struct shmem_inode_info *info, pgoff_t index) { - struct vm_area_struct pvma; - struct folio *folio; + struct mempolicy *mpol; + pgoff_t ilx; + struct page *page; - shmem_pseudo_vma_init(&pvma, info, index); - folio = vma_alloc_folio(gfp, HPAGE_PMD_ORDER, &pvma, 0, true); - shmem_pseudo_vma_destroy(&pvma); + mpol = shmem_get_pgoff_policy(info, index, HPAGE_PMD_ORDER, &ilx); + page = alloc_pages_mpol(gfp, HPAGE_PMD_ORDER, mpol, ilx, numa_node_id()); + mpol_cond_put(mpol); - return folio; + return page_rmappable_folio(page); } static struct folio *shmem_alloc_folio(gfp_t gfp, struct shmem_inode_info *info, pgoff_t index) { - struct vm_area_struct pvma; - struct folio *folio; + struct mempolicy *mpol; + pgoff_t ilx; + struct page *page; - shmem_pseudo_vma_init(&pvma, info, index); - folio = vma_alloc_folio(gfp, 0, &pvma, 0, false); - shmem_pseudo_vma_destroy(&pvma); + mpol = shmem_get_pgoff_policy(info, index, 0, &ilx); + page = alloc_pages_mpol(gfp, 0, mpol, ilx, numa_node_id()); + mpol_cond_put(mpol); - return folio; + return (struct folio *)page; } static struct folio *shmem_alloc_and_add_folio(gfp_t gfp, @@ -1900,7 +1884,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index, count_memcg_event_mm(fault_mm, PGMAJFAULT); } /* Here we actually start the io */ - folio = shmem_swapin(swap, gfp, info, index); + folio = shmem_swapin_cluster(swap, gfp, info, index); if (!folio) { error = -ENOMEM; goto failed; @@ -2351,15 +2335,41 @@ static int shmem_set_policy(struct vm_area_struct *vma, struct mempolicy *mpol) } static struct mempolicy *shmem_get_policy(struct vm_area_struct *vma, - unsigned long addr) + unsigned long addr, pgoff_t 
*ilx) { struct inode *inode = file_inode(vma->vm_file); pgoff_t index; + /* + * Bias interleave by inode number to distribute better across nodes; + * but this interface is independent of which page order is used, so + * supplies only that bias, letting caller apply the offset (adjusted + * by page order, as in shmem_get_pgoff_policy() and get_vma_policy()). + */ + *ilx = inode->i_ino; index = ((addr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff; return mpol_shared_policy_lookup(&SHMEM_I(inode)->policy, index); } -#endif + +static struct mempolicy *shmem_get_pgoff_policy(struct shmem_inode_info *info, + pgoff_t index, unsigned int order, pgoff_t *ilx) +{ + struct mempolicy *mpol; + + /* Bias interleave by inode number to distribute better across nodes */ + *ilx = info->vfs_inode.i_ino + (index >> order); + + mpol = mpol_shared_policy_lookup(&info->policy, index); + return mpol ? mpol : get_task_policy(current); +} +#else +static struct mempolicy *shmem_get_pgoff_policy(struct shmem_inode_info *info, + pgoff_t index, unsigned int order, pgoff_t *ilx) +{ + *ilx = 0; + return NULL; +} +#endif /* CONFIG_NUMA */ int shmem_lock(struct file *file, int lock, struct ucounts *ucounts) { diff --git a/mm/swap.h b/mm/swap.h index 8a3c7a0ace4f..73c332ee4d91 100644 --- a/mm/swap.h +++ b/mm/swap.h @@ -2,6 +2,8 @@ #ifndef _MM_SWAP_H #define _MM_SWAP_H +struct mempolicy; + #ifdef CONFIG_SWAP #include /* for bio_end_io_t */ @@ -48,11 +50,10 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, unsigned long addr, struct swap_iocb **plug); struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, - struct vm_area_struct *vma, - unsigned long addr, + struct mempolicy *mpol, pgoff_t ilx, bool *new_page_allocated); struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag, - struct vm_fault *vmf); + struct mempolicy *mpol, pgoff_t ilx); struct page *swapin_readahead(swp_entry_t entry, gfp_t flag, struct vm_fault *vmf); @@ -80,7 +81,7 @@ static inline void show_swap_cache_info(void) } static inline struct page *swap_cluster_readahead(swp_entry_t entry, - gfp_t gfp_mask, struct vm_fault *vmf) + gfp_t gfp_mask, struct mempolicy *mpol, pgoff_t ilx) { return NULL; } diff --git a/mm/swap_state.c b/mm/swap_state.c index 788e36a06c34..4afa4ed464d2 100644 --- a/mm/swap_state.c +++ b/mm/swap_state.c @@ -10,6 +10,7 @@ #include #include #include +#include #include #include #include @@ -411,8 +412,8 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping, } struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, - struct vm_area_struct *vma, unsigned long addr, - bool *new_page_allocated) + struct mempolicy *mpol, pgoff_t ilx, + bool *new_page_allocated) { struct swap_info_struct *si; struct folio *folio; @@ -455,7 +456,8 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, * before marking swap_map SWAP_HAS_CACHE, when -EEXIST will * cause any racers to loop around until we add it to cache. 
*/ - folio = vma_alloc_folio(gfp_mask, 0, vma, addr, false); + folio = (struct folio *)alloc_pages_mpol(gfp_mask, 0, + mpol, ilx, numa_node_id()); if (!folio) goto fail_put_swap; @@ -547,14 +549,19 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, struct vm_area_struct *vma, unsigned long addr, struct swap_iocb **plug) { - bool page_was_allocated; - struct page *retpage = __read_swap_cache_async(entry, gfp_mask, - vma, addr, &page_was_allocated); + bool page_allocated; + struct mempolicy *mpol; + pgoff_t ilx; + struct page *page; - if (page_was_allocated) - swap_readpage(retpage, false, plug); + mpol = get_vma_policy(vma, addr, 0, &ilx); + page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx, + &page_allocated); + mpol_cond_put(mpol); - return retpage; + if (page_allocated) + swap_readpage(page, false, plug); + return page; } static unsigned int __swapin_nr_pages(unsigned long prev_offset, @@ -638,7 +645,8 @@ static void inc_nr_protected(struct page *page) * swap_cluster_readahead - swap in pages in hope we need them soon * @entry: swap entry of this memory * @gfp_mask: memory allocation flags - * @vmf: fault information + * @mpol: NUMA memory allocation policy to be applied + * @ilx: NUMA interleave index, for use only when MPOL_INTERLEAVE * * Returns the struct page for entry and addr, after queueing swapin. * @@ -647,13 +655,12 @@ static void inc_nr_protected(struct page *page) * because it doesn't cost us any seek time. We also make sure to queue * the 'original' request together with the readahead ones... * - * This has been extended to use the NUMA policies from the mm triggering - * the readahead. - * - * Caller must hold read mmap_lock if vmf->vma is not NULL. + * Note: it is intentional that the same NUMA policy and interleave index + * are used for every page of the readahead: neighbouring pages on swap + * are fairly likely to have been swapped out from the same node. 
*/ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask, - struct vm_fault *vmf) + struct mempolicy *mpol, pgoff_t ilx) { struct page *page; unsigned long entry_offset = swp_offset(entry); @@ -664,8 +671,6 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask, struct blk_plug plug; struct swap_iocb *splug = NULL; bool page_allocated; - struct vm_area_struct *vma = vmf->vma; - unsigned long addr = vmf->address; mask = swapin_nr_pages(offset) - 1; if (!mask) @@ -683,8 +688,8 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask, for (offset = start_offset; offset <= end_offset ; offset++) { /* Ok, do the async read-ahead now */ page = __read_swap_cache_async( - swp_entry(swp_type(entry), offset), - gfp_mask, vma, addr, &page_allocated); + swp_entry(swp_type(entry), offset), + gfp_mask, mpol, ilx, &page_allocated); if (!page) continue; if (page_allocated) { @@ -698,11 +703,13 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask, } blk_finish_plug(&plug); swap_read_unplug(splug); - lru_add_drain(); /* Push any new pages onto the LRU now */ skip: /* The page was likely read above, so no need for plugging here */ - page = read_swap_cache_async(entry, gfp_mask, vma, addr, NULL); + page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx, + &page_allocated); + if (unlikely(page_allocated)) + swap_readpage(page, false, NULL); #ifdef CONFIG_ZSWAP if (page) inc_nr_protected(page); @@ -805,8 +812,10 @@ static void swap_ra_info(struct vm_fault *vmf, /** * swap_vma_readahead - swap in pages in hope we need them soon - * @fentry: swap entry of this memory + * @targ_entry: swap entry of the targeted memory * @gfp_mask: memory allocation flags + * @mpol: NUMA memory allocation policy to be applied + * @targ_ilx: NUMA interleave index, for use only when MPOL_INTERLEAVE * @vmf: fault information * * Returns the struct page for entry and addr, after queueing swapin. @@ -817,16 +826,17 @@ static void swap_ra_info(struct vm_fault *vmf, * Caller must hold read mmap_lock if vmf->vma is not NULL. 
* */ -static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask, +static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask, + struct mempolicy *mpol, pgoff_t targ_ilx, struct vm_fault *vmf) { struct blk_plug plug; struct swap_iocb *splug = NULL; - struct vm_area_struct *vma = vmf->vma; struct page *page; pte_t *pte = NULL, pentry; unsigned long addr; swp_entry_t entry; + pgoff_t ilx; unsigned int i; bool page_allocated; struct vma_swap_readahead ra_info = { @@ -838,9 +848,10 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask, goto skip; addr = vmf->address - (ra_info.offset * PAGE_SIZE); + ilx = targ_ilx - ra_info.offset; blk_start_plug(&plug); - for (i = 0; i < ra_info.nr_pte; i++, addr += PAGE_SIZE) { + for (i = 0; i < ra_info.nr_pte; i++, ilx++, addr += PAGE_SIZE) { if (!pte++) { pte = pte_offset_map(vmf->pmd, addr); if (!pte) @@ -854,8 +865,8 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask, continue; pte_unmap(pte); pte = NULL; - page = __read_swap_cache_async(entry, gfp_mask, vma, - addr, &page_allocated); + page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx, + &page_allocated); if (!page) continue; if (page_allocated) { @@ -874,7 +885,10 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask, lru_add_drain(); skip: /* The page was likely read above, so no need for plugging here */ - page = read_swap_cache_async(fentry, gfp_mask, vma, vmf->address, NULL); + page = __read_swap_cache_async(targ_entry, gfp_mask, mpol, targ_ilx, + &page_allocated); + if (unlikely(page_allocated)) + swap_readpage(page, false, NULL); #ifdef CONFIG_ZSWAP if (page) inc_nr_protected(page); @@ -897,9 +911,16 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask, struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask, struct vm_fault *vmf) { - return swap_use_vma_readahead() ? - swap_vma_readahead(entry, gfp_mask, vmf) : - swap_cluster_readahead(entry, gfp_mask, vmf); + struct mempolicy *mpol; + pgoff_t ilx; + struct page *page; + + mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx); + page = swap_use_vma_readahead() ? 
+ swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf) : + swap_cluster_readahead(entry, gfp_mask, mpol, ilx); + mpol_cond_put(mpol); + return page; } #ifdef CONFIG_SYSFS From patchwork Tue Oct 3 09:27:47 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hugh Dickins X-Patchwork-Id: 147788 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:612c:2a8e:b0:403:3b70:6f57 with SMTP id in14csp1959320vqb; Tue, 3 Oct 2023 02:28:11 -0700 (PDT) X-Google-Smtp-Source: AGHT+IGczgeb6tfs0/d4NKNzZwjCkl6runCjkvX/yJDdsg4UpSeQPUzf48s2dW0lQ3jdO74ykuGr X-Received: by 2002:a05:6871:b0d:b0:1dc:f4de:46b8 with SMTP id fq13-20020a0568710b0d00b001dcf4de46b8mr16844268oab.59.1696325290892; Tue, 03 Oct 2023 02:28:10 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1696325290; cv=none; d=google.com; s=arc-20160816; b=LBZHt2kLGLHUHjV3q9vQm9FODlhIDZmYg2xowVF6Xhq2p6VllG/oHCQXguai6jWaQV tZXP2iGVyfVs4DeEBicMdenpLN8hdFo+nCOc43j0Fm3EDeryfjUE0Zkr1QF2PU6JbCc6 mmkxUzUuFNHeUpDDZHYpacCfg4g7q6HacCxK17bDPWNZH5nBtiBCsmmzzbePNUVv9aUi 977vw3Z4+5lj/REq6czlc0EcTQ3nfdwPLT9Pv+LYcfjLg1IGShkxUhwZdeKYoTpX9+kk jWp2LHx+/KfLNjtXc6dLjrl9sAGx9MCXtzyFFP2rlaCbfhEMVmM7QSOKay4XzMLGpNAl r9Vw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:message-id:in-reply-to :subject:cc:to:from:date:dkim-signature; bh=mU56FlmjTv5eFSUjfveUTDJtj2QxEcbZGZN96w5K2oc=; fh=uua5dLukz2E+eiUjHQ3XQZ9nrmHo6GVFaq4ywUulylI=; b=IN4X1WT5MN/CJ3p+nPmjZRBqofm4twpny/GtzIoso9rK814+NdE/alSd+KoF5osEC7 8j6oQpeY05ZCddVCeOMqOCaXoy0qkZvzY/8EY8wLt/z4LHTh0tMrOxa30bgbXcZ9KbUj ctWJbCD1gFUYxCVD/iFJYgvHuiyO/8Env5Js0Hx7hpcMcJbtYh8yzPa6oPGdcj5NX/qH sOCN7PeOPlFUiDtrL26NrbE2lpH8X/bR2c9/4IbREkdc3YAwjt/hsYONtFZ/RsZdDdMa 9V8bnJk2D4ZHi/4UviCReAps7Xw+WbncWpt9ELn9JQALysPZRYPP3BOGisGli7gDBRZT YMLQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@google.com header.s=20230601 header.b=BuHxbeKq; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.33 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: from lipwig.vger.email (lipwig.vger.email. 
[172.10.233.147]) by smtp.gmail.com with ESMTPSA id k6-20020a0dc806000000b0058038e6609csm256304ywd.74.2023.10.03.02.27.47 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 03 Oct 2023 02:27:49 -0700 (PDT) Date: Tue, 3 Oct 2023 02:27:47 -0700 (PDT) From: Hugh Dickins X-X-Sender: hugh@ripple.attlocal.net To: Andrew Morton cc: Andi Kleen , Christoph Lameter , Matthew Wilcox , Mike Kravetz , David Hildenbrand , Suren Baghdasaryan , Yang Shi , Sidhartha Kumar , Vishal Moola , Kefeng Wang , Greg Kroah-Hartman , Tejun Heo , Mel Gorman , Michal Hocko , "Huang, Ying" , linux-kernel@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH v2 11/12] mempolicy: mmap_lock is not needed while migrating folios In-Reply-To: Message-ID: <21e564e8-269f-6a89-7ee2-fd612831c289@google.com> References: MIME-Version: 1.0 X-Spam-Status: No, score=-8.4 required=5.0 tests=DKIMWL_WL_MED,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lipwig.vger.email Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (lipwig.vger.email [0.0.0.0]); Tue, 03 Oct 2023 02:28:08 -0700 (PDT) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1778725988315490574 X-GMAIL-MSGID: 1778725988315490574 mbind(2) holds down_write of current task's mmap_lock throughout (exclusive because it needs to set the new mempolicy on the vmas); migrate_pages(2) holds down_read of pid's mmap_lock throughout. They both hold mmap_lock across the internal migrate_pages(), under which all new page allocations (huge or small) are made. I'm nervous about it; and migrate_pages() certainly does not need mmap_lock itself. It's done this way for mbind(2), because its page allocator is vma_alloc_folio() or alloc_hugetlb_folio_vma(), both of which depend on vma and address. Now that we have alloc_pages_mpol(), depending on (refcounted) memory policy and interleave index, mbind(2) can be modified to use that or alloc_hugetlb_folio_nodemask(), and then not need mmap_lock across the internal migrate_pages() at all: add alloc_migration_target_by_mpol() to replace mbind's new_page(). (After that change, alloc_hugetlb_folio_vma() is used by nothing but a userfaultfd function: move it out of hugetlb.h and into the #ifdef.) migrate_pages(2) has chosen its target node before migrating, so can continue to use the standard alloc_migration_target(); but let it take and drop mmap_lock just around migrate_to_node()'s queue_pages_range(): neither the node-to-node calculations nor the page migrations need it. It seems unlikely, but it is conceivable that some userspace depends on the kernel's mmap_lock exclusion here, instead of doing its own locking: more likely in a testsuite than in real life. It is also possible, of course, that some pages on the list will be munmapped by another thread before they are migrated, or a newer memory policy applied to the range by that time: but such races could happen before, as soon as mmap_lock was dropped, so it does not appear to be a concern. 
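For illustration, the reshaped locking in migrate_to_node() looks roughly like this: a condensed sketch of the code in the diff below, with the migration_target_control setup and return-value accounting simplified (the "sketch_" name is hypothetical), not a literal copy of the patch:

	/*
	 * Illustrative sketch: mmap_lock is read-held just around the vma
	 * walk that queues the pages; the node-to-node migration itself
	 * runs without it.
	 */
	static long sketch_migrate_to_node(struct mm_struct *mm, int source,
					   int dest, int flags)
	{
		struct migration_target_control mtc = {
			.nid = dest,
			.gfp_mask = GFP_HIGHUSER_MOVABLE | __GFP_THISNODE,
		};
		struct vm_area_struct *vma;
		nodemask_t nmask;
		LIST_HEAD(pagelist);
		long nr_failed;
		int err = 0;

		nodes_clear(nmask);
		node_set(source, nmask);

		mmap_read_lock(mm);
		vma = find_vma(mm, 0);
		nr_failed = queue_pages_range(mm, vma->vm_start, mm->task_size,
				&nmask, flags | MPOL_MF_DISCONTIG_OK, &pagelist);
		mmap_read_unlock(mm);	/* nothing below needs mmap_lock */

		if (!list_empty(&pagelist)) {
			err = migrate_pages(&pagelist, alloc_migration_target,
					NULL, (unsigned long)&mtc, MIGRATE_SYNC,
					MR_SYSCALL, NULL);
			if (err)
				putback_movable_pages(&pagelist);
		}
		return err ? err : nr_failed;	/* return accounting simplified */
	}

Correspondingly, do_migrate_pages() stops taking mmap_read_lock() around its whole node-to-node loop, so each migrate_to_node() call takes and drops the lock just for its own queue_pages_range().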
Signed-off-by: Hugh Dickins --- include/linux/hugetlb.h | 9 ----- mm/hugetlb.c | 38 ++++++++++---------- mm/mempolicy.c | 83 ++++++++++++++++++++++--------------------- 3 files changed, 63 insertions(+), 67 deletions(-) diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index a574e26e18a2..7c6faee07b42 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -716,8 +716,6 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma, unsigned long addr, int avoid_reserve); struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid, nodemask_t *nmask, gfp_t gfp_mask); -struct folio *alloc_hugetlb_folio_vma(struct hstate *h, struct vm_area_struct *vma, - unsigned long address); int hugetlb_add_to_page_cache(struct folio *folio, struct address_space *mapping, pgoff_t idx); void restore_reserve_on_error(struct hstate *h, struct vm_area_struct *vma, @@ -1040,13 +1038,6 @@ alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid, return NULL; } -static inline struct folio *alloc_hugetlb_folio_vma(struct hstate *h, - struct vm_area_struct *vma, - unsigned long address) -{ - return NULL; -} - static inline int __alloc_bootmem_huge_page(struct hstate *h) { return 0; diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 9d5b7f208dac..68ff79061f88 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -2458,24 +2458,6 @@ struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid, return alloc_migrate_hugetlb_folio(h, gfp_mask, preferred_nid, nmask); } -/* mempolicy aware migration callback */ -struct folio *alloc_hugetlb_folio_vma(struct hstate *h, struct vm_area_struct *vma, - unsigned long address) -{ - struct mempolicy *mpol; - nodemask_t *nodemask; - struct folio *folio; - gfp_t gfp_mask; - int node; - - gfp_mask = htlb_alloc_mask(h); - node = huge_node(vma, address, gfp_mask, &mpol, &nodemask); - folio = alloc_hugetlb_folio_nodemask(h, node, nodemask, gfp_mask); - mpol_cond_put(mpol); - - return folio; -} - /* * Increase the hugetlb pool such that it can accommodate a reservation * of size 'delta'. @@ -6279,6 +6261,26 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma, } #ifdef CONFIG_USERFAULTFD +/* + * Can probably be eliminated, but still used by hugetlb_mfill_atomic_pte(). + */ +static struct folio *alloc_hugetlb_folio_vma(struct hstate *h, + struct vm_area_struct *vma, unsigned long address) +{ + struct mempolicy *mpol; + nodemask_t *nodemask; + struct folio *folio; + gfp_t gfp_mask; + int node; + + gfp_mask = htlb_alloc_mask(h); + node = huge_node(vma, address, gfp_mask, &mpol, &nodemask); + folio = alloc_hugetlb_folio_nodemask(h, node, nodemask, gfp_mask); + mpol_cond_put(mpol); + + return folio; +} + /* * Used by userfaultfd UFFDIO_* ioctls. Based on userfaultfd's mfill_atomic_pte * with modifications for hugetlb pages. 
diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 8cf76de12acd..a7b34b9c00ef 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -417,6 +417,8 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX] = { static bool migrate_folio_add(struct folio *folio, struct list_head *foliolist, unsigned long flags); +static nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *pol, + pgoff_t ilx, int *nid); static bool strictly_unmovable(unsigned long flags) { @@ -1043,6 +1045,8 @@ static long migrate_to_node(struct mm_struct *mm, int source, int dest, node_set(source, nmask); VM_BUG_ON(!(flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))); + + mmap_read_lock(mm); vma = find_vma(mm, 0); /* @@ -1053,6 +1057,7 @@ static long migrate_to_node(struct mm_struct *mm, int source, int dest, */ nr_failed = queue_pages_range(mm, vma->vm_start, mm->task_size, &nmask, flags | MPOL_MF_DISCONTIG_OK, &pagelist); + mmap_read_unlock(mm); if (!list_empty(&pagelist)) { err = migrate_pages(&pagelist, alloc_migration_target, NULL, @@ -1081,8 +1086,6 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from, lru_cache_disable(); - mmap_read_lock(mm); - /* * Find a 'source' bit set in 'tmp' whose corresponding 'dest' * bit in 'to' is not also set in 'tmp'. Clear the found 'source' @@ -1162,7 +1165,6 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from, if (err < 0) break; } - mmap_read_unlock(mm); lru_cache_enable(); if (err < 0) @@ -1171,44 +1173,38 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from, } /* - * Allocate a new page for page migration based on vma policy. - * Start by assuming the page is mapped by the same vma as contains @start. - * Search forward from there, if not. N.B., this assumes that the - * list of pages handed to migrate_pages()--which is how we get here-- - * is in virtual address order. + * Allocate a new folio for page migration, according to NUMA mempolicy. */ -static struct folio *new_folio(struct folio *src, unsigned long start) +static struct folio *alloc_migration_target_by_mpol(struct folio *src, + unsigned long private) { - struct vm_area_struct *vma; - unsigned long address; - VMA_ITERATOR(vmi, current->mm, start); - gfp_t gfp = GFP_HIGHUSER_MOVABLE | __GFP_RETRY_MAYFAIL; + struct mempolicy *pol = (struct mempolicy *)private; + pgoff_t ilx = 0; /* improve on this later */ + struct page *page; + unsigned int order; + int nid = numa_node_id(); + gfp_t gfp; - for_each_vma(vmi, vma) { - address = page_address_in_vma(&src->page, vma); - if (address != -EFAULT) - break; - } - - /* - * __get_vma_policy() now expects a genuine non-NULL vma. Return NULL - * when the page can no longer be located in a vma: that is not ideal - * (migrate_pages() will give up early, presuming ENOMEM), but good - * enough to avoid a crash by syzkaller or concurrent holepunch. 
- */ - if (!vma) - return NULL; + order = folio_order(src); + ilx += src->index >> order; if (folio_test_hugetlb(src)) { - return alloc_hugetlb_folio_vma(folio_hstate(src), - vma, address); + nodemask_t *nodemask; + struct hstate *h; + + h = folio_hstate(src); + gfp = htlb_alloc_mask(h); + nodemask = policy_nodemask(gfp, pol, ilx, &nid); + return alloc_hugetlb_folio_nodemask(h, nid, nodemask, gfp); } if (folio_test_large(src)) gfp = GFP_TRANSHUGE; + else + gfp = GFP_HIGHUSER_MOVABLE | __GFP_RETRY_MAYFAIL | __GFP_COMP; - return vma_alloc_folio(gfp, folio_order(src), vma, address, - folio_test_large(src)); + page = alloc_pages_mpol(gfp, order, pol, ilx, nid); + return page_rmappable_folio(page); } #else @@ -1224,7 +1220,8 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from, return -ENOSYS; } -static struct folio *new_folio(struct folio *src, unsigned long start) +static struct folio *alloc_migration_target_by_mpol(struct folio *src, + unsigned long private) { return NULL; } @@ -1298,6 +1295,7 @@ static long do_mbind(unsigned long start, unsigned long len, if (nr_failed < 0) { err = nr_failed; + nr_failed = 0; } else { vma_iter_init(&vmi, mm, start); prev = vma_prev(&vmi); @@ -1308,19 +1306,24 @@ static long do_mbind(unsigned long start, unsigned long len, } } - if (!err) { - if (!list_empty(&pagelist)) { - nr_failed |= migrate_pages(&pagelist, new_folio, NULL, - start, MIGRATE_SYNC, MR_MEMPOLICY_MBIND, NULL); + mmap_write_unlock(mm); + + if (!err && !list_empty(&pagelist)) { + /* Convert MPOL_DEFAULT's NULL to task or default policy */ + if (!new) { + new = get_task_policy(current); + mpol_get(new); } - if (nr_failed && (flags & MPOL_MF_STRICT)) - err = -EIO; + nr_failed |= migrate_pages(&pagelist, + alloc_migration_target_by_mpol, NULL, + (unsigned long)new, MIGRATE_SYNC, + MR_MEMPOLICY_MBIND, NULL); } + if (nr_failed && (flags & MPOL_MF_STRICT)) + err = -EIO; if (!list_empty(&pagelist)) putback_movable_pages(&pagelist); - - mmap_write_unlock(mm); mpol_out: mpol_put(new); if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) From patchwork Tue Oct 3 09:29:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hugh Dickins X-Patchwork-Id: 147790 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:612c:2a8e:b0:403:3b70:6f57 with SMTP id in14csp1959737vqb; Tue, 3 Oct 2023 02:29:16 -0700 (PDT) X-Google-Smtp-Source: AGHT+IGcdVTwoM0noPqjf+XSU0gIFhSrUdPm9D3XabNbth5o/H6gdLcMAklTwhdrzOY7aqhfQw8S X-Received: by 2002:a05:6808:9b2:b0:3a7:2690:94e0 with SMTP id e18-20020a05680809b200b003a7269094e0mr13309465oig.4.1696325356288; Tue, 03 Oct 2023 02:29:16 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1696325356; cv=none; d=google.com; s=arc-20160816; b=HynKHr2xKlTRqyb8/yVSPDYzf+H6pXxTxH53b659nWlCLe7yH+8PCn+0a7v89eFwoG ElX2gm7hk3jYiu4QLGqesGjWXnLR4Hw2iJ33txV2VS0UUCp2VIqC4BMmwThBcMvdzXMI cN641cQR2ymFTW+/YDV5PdzQIRcHqfGrgUfYeA/tgTtJVwAynJvRh2oahdOa4dQBSSRG PTOWTOXAVPr9Nj2bjwWerkG/qnQb/NofqUbG6gz917b1lArPBBpcK3uS0U1XgwA1qwCa 826sGagUZ/WpioHi2xYoaA55U66RsSD67mFNnqpJy8gOwyCFMVLPTtLQEDxSixERLUaQ Tt2A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:message-id:in-reply-to :subject:cc:to:from:date:dkim-signature; bh=wvvp9tOZAm0ZnFPspgZt0Lcdrx1YEcxNfvbw4PaEVR0=; fh=uua5dLukz2E+eiUjHQ3XQZ9nrmHo6GVFaq4ywUulylI=; b=AwGv1SItfspBetxSgVa7lafQbDEmQQ0wxJXWVspE2RH0F3LUGuztDqkFBfumHrOutK 
Date: Tue, 3 Oct 2023 02:29:00 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
cc: Andi Kleen , Christoph Lameter , Matthew Wilcox , Mike Kravetz , David Hildenbrand , Suren Baghdasaryan , Yang Shi , Sidhartha Kumar , Vishal Moola , Kefeng Wang , Greg Kroah-Hartman , Tejun Heo , Mel Gorman , Michal Hocko , "Huang, Ying" , linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 12/12] mempolicy: migration attempt to match interleave nodes
In-Reply-To:
Message-ID: <77954a5-9c9b-1c11-7d5c-3262c01b895f@google.com>
References:
MIME-Version: 1.0

Improve alloc_migration_target_by_mpol()'s treatment of MPOL_INTERLEAVE.

Make an effort in do_mbind(), to identify the correct interleave index for the first page to be migrated, so that it and all subsequent pages from the same vma will be targeted to precisely their intended nodes. Pages from following vmas will still be interleaved from the requested nodemask, but perhaps starting from a different base.

Whether this is worth doing at all, or worth improving further, is arguable: queue_folio_required() is right not to care about the precise placement on interleaved nodes; but this little effort seems appropriate.
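To see what the base index buys, the arithmetic can be modelled in a few lines of standalone C (a simplified sketch, not the kernel implementation: it ignores folio order, weighted interleave and cpusets, uses a plain bitmask in place of nodemask_t, relies on the GCC/Clang popcount builtin, and every name below is invented for illustration):

/*
 * Model of the interleave arithmetic: the nodemask is a bitmask, and the
 * interleave index ilx picks the (ilx % nr_nodes)-th set bit.  Subtracting
 * the first page's index from its policy-derived index gives a base, so
 * that base + page->index lands each page of that vma on its intended node.
 */
#include <stdio.h>

static int model_interleave_nid(unsigned long nodemask, unsigned long ilx)
{
	int nnodes = __builtin_popcountl(nodemask);
	int target = (int)(ilx % nnodes);
	int nid = -1;

	do {
		nid++;
		while (!(nodemask & (1UL << nid)))
			nid++;
	} while (target--);

	return nid;
}

int main(void)
{
	unsigned long nodemask = 0x05;	/* nodes 0 and 2 allowed */
	unsigned long first_index = 40;	/* index of the first page queued */
	unsigned long vma_ilx = 7;	/* what get_vma_policy() reported for it */
	unsigned long base = vma_ilx - first_index;	/* mmpol.ilx: wraps, like pgoff_t */

	for (unsigned long index = first_index; index < first_index + 4; index++)
		printf("page index %lu -> node %d\n", index,
		       model_interleave_nid(nodemask, base + index));
	return 0;
}

With nodes 0 and 2 allowed, the first page lands on the node selected by its vma-relative index, and the following pages of that vma alternate from there, which is the behaviour the patch aims for.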
Signed-off-by: Hugh Dickins --- mm/mempolicy.c | 49 ++++++++++++++++++++++++++++++++++++++++++++++--- 1 file changed, 46 insertions(+), 3 deletions(-) diff --git a/mm/mempolicy.c b/mm/mempolicy.c index a7b34b9c00ef..b01922e88548 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -430,6 +430,11 @@ static bool strictly_unmovable(unsigned long flags) MPOL_MF_STRICT; } +struct migration_mpol { /* for alloc_migration_target_by_mpol() */ + struct mempolicy *pol; + pgoff_t ilx; +}; + struct queue_pages { struct list_head *pagelist; unsigned long flags; @@ -1178,8 +1183,9 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from, static struct folio *alloc_migration_target_by_mpol(struct folio *src, unsigned long private) { - struct mempolicy *pol = (struct mempolicy *)private; - pgoff_t ilx = 0; /* improve on this later */ + struct migration_mpol *mmpol = (struct migration_mpol *)private; + struct mempolicy *pol = mmpol->pol; + pgoff_t ilx = mmpol->ilx; struct page *page; unsigned int order; int nid = numa_node_id(); @@ -1234,6 +1240,7 @@ static long do_mbind(unsigned long start, unsigned long len, struct mm_struct *mm = current->mm; struct vm_area_struct *vma, *prev; struct vma_iterator vmi; + struct migration_mpol mmpol; struct mempolicy *new; unsigned long end; long err; @@ -1314,9 +1321,45 @@ static long do_mbind(unsigned long start, unsigned long len, new = get_task_policy(current); mpol_get(new); } + mmpol.pol = new; + mmpol.ilx = 0; + + /* + * In the interleaved case, attempt to allocate on exactly the + * targeted nodes, for the first VMA to be migrated; for later + * VMAs, the nodes will still be interleaved from the targeted + * nodemask, but one by one may be selected differently. + */ + if (new->mode == MPOL_INTERLEAVE) { + struct page *page; + unsigned int order; + unsigned long addr = -EFAULT; + + list_for_each_entry(page, &pagelist, lru) { + if (!PageKsm(page)) + break; + } + if (!list_entry_is_head(page, &pagelist, lru)) { + vma_iter_init(&vmi, mm, start); + for_each_vma_range(vmi, vma, end) { + addr = page_address_in_vma(page, vma); + if (addr != -EFAULT) + break; + } + } + if (addr != -EFAULT) { + order = compound_order(page); + /* We already know the pol, but not the ilx */ + mpol_cond_put(get_vma_policy(vma, addr, order, + &mmpol.ilx)); + /* Set base from which to increment by index */ + mmpol.ilx -= page->index >> order; + } + } + nr_failed |= migrate_pages(&pagelist, alloc_migration_target_by_mpol, NULL, - (unsigned long)new, MIGRATE_SYNC, + (unsigned long)&mmpol, MIGRATE_SYNC, MR_MEMPOLICY_MBIND, NULL); }
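The way the policy and base index travel from do_mbind() to the allocating callback is the usual "private cookie" pattern visible in the hunk above: the caller packs a pointer-sized cookie and the migration loop hands it back to the callback for every folio. A standalone model of that plumbing (again not kernel code, all names invented for illustration):

/*
 * The caller packs a cookie with the (model) mempolicy and base interleave
 * index; the migration loop passes it back to the callback that picks a
 * target node for each folio.
 */
#include <stdio.h>

struct model_mpol {
	int nr_nodes;		/* stands in for the mempolicy's nodemask */
	unsigned long ilx;	/* base interleave index */
};

/* stands in for alloc_migration_target_by_mpol(src, private) */
static int model_target_node(unsigned long src_index, unsigned long private)
{
	struct model_mpol *mmpol = (struct model_mpol *)private;

	/* per-folio index = base + source index, as in the patch */
	return (int)((mmpol->ilx + src_index) % mmpol->nr_nodes);
}

/* stands in for migrate_pages(&pagelist, get_target, NULL, private, ...) */
static void model_migrate_pages(const unsigned long *indexes, int n,
				int (*get_target)(unsigned long, unsigned long),
				unsigned long private)
{
	for (int i = 0; i < n; i++)
		printf("folio index %lu -> target node %d\n",
		       indexes[i], get_target(indexes[i], private));
}

int main(void)
{
	struct model_mpol mmpol = { .nr_nodes = 4, .ilx = 0 };
	unsigned long indexes[] = { 40, 41, 42 };

	model_migrate_pages(indexes, 3, model_target_node, (unsigned long)&mmpol);
	return 0;
}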