From patchwork Mon Jul 10 20:02:58 2023
X-Patchwork-Submitter: Alison Schofield
X-Patchwork-Id: 118065
From: alison.schofield@intel.com
To: "Rafael J. Wysocki", Len Brown, Dan Williams, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin", Andy Lutomirski, Peter Zijlstra, Andrew Morton, Jonathan Cameron, Dave Jiang, Mike Rapoport
Cc: Alison Schofield, x86@kernel.org, linux-cxl@vger.kernel.org, linux-acpi@vger.kernel.org, linux-kernel@vger.kernel.org, Derick Marks
Subject: [PATCH v4 1/2] x86/numa: Introduce numa_fill_memblks()
Date: Mon, 10 Jul 2023 13:02:58 -0700
Message-Id:
X-Mailer: git-send-email 2.39.2
X-Mailing-List: linux-kernel@vger.kernel.org

From: Alison Schofield

numa_fill_memblks() fills in the gaps in numa_meminfo memblks over a
physical address range. The ACPI driver will use numa_fill_memblks()
to implement a new Linux policy that prescribes extending proximity
domains in a portion of a CFMWS window to the entire window.
Dan Williams offered this explanation of the policy:

A CFMWS is an ACPI data structure that indicates *potential* locations
where CXL memory can be placed. It is the playground where the CXL
driver has free rein to establish regions. That space can be populated
by BIOS-created regions, or driver-created regions, after hotplug or
other reconfiguration.

When BIOS creates a region in a CXL Window it additionally describes
that subset of the Window range in the other typical ACPI tables SRAT,
SLIT, and HMAT. The rationale for BIOS not pre-describing the entire
CXL Window in SRAT, SLIT, and HMAT is that it cannot predict the
future. I.e. there is nothing stopping higher or lower performance
devices being placed in the same Window. Compare that to ACPI memory
hotplug that just onlines additional capacity in the proximity domain
with little freedom for dynamic performance differentiation.

That leaves the OS with a choice: should unpopulated window capacity
match the proximity domain of an existing region, or should it
allocate a new one? This patch takes the simple position of minimizing
proximity domain proliferation by reusing any proximity domain
intersection for the entire Window. If the Window has no intersections
then allocate a new proximity domain.

Note that SRAT, SLIT and HMAT information can be enumerated
dynamically in a standard way from device-provided data. Think of CXL
as the end of ACPI needing to describe memory attributes; CXL offers a
standard discovery model for performance attributes, but Linux still
needs to interoperate with the old regime.
Reported-by: Derick Marks
Suggested-by: Dan Williams
Signed-off-by: Alison Schofield
Reviewed-by: Dan Williams
Tested-by: Derick Marks
---
 arch/x86/include/asm/sparsemem.h |  2 +
 arch/x86/mm/numa.c               | 80 ++++++++++++++++++++++++++++++++
 include/linux/numa.h             |  7 +++
 3 files changed, 89 insertions(+)

diff --git a/arch/x86/include/asm/sparsemem.h b/arch/x86/include/asm/sparsemem.h
index 64df897c0ee3..1be13b2dfe8b 100644
--- a/arch/x86/include/asm/sparsemem.h
+++ b/arch/x86/include/asm/sparsemem.h
@@ -37,6 +37,8 @@ extern int phys_to_target_node(phys_addr_t start);
 #define phys_to_target_node phys_to_target_node
 extern int memory_add_physaddr_to_nid(u64 start);
 #define memory_add_physaddr_to_nid memory_add_physaddr_to_nid
+extern int numa_fill_memblks(u64 start, u64 end);
+#define numa_fill_memblks numa_fill_memblks
 #endif
 #endif /* __ASSEMBLY__ */
diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index 2aadb2019b4f..c01c5506fd4a 100644
--- a/arch/x86/mm/numa.c
+++ b/arch/x86/mm/numa.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include <linux/sort.h>
 #include
 #include
@@ -961,4 +962,83 @@ int memory_add_physaddr_to_nid(u64 start)
 	return nid;
 }
 EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
+
+static int __init cmp_memblk(const void *a, const void *b)
+{
+	const struct numa_memblk *ma = *(const struct numa_memblk **)a;
+	const struct numa_memblk *mb = *(const struct numa_memblk **)b;
+
+	return ma->start - mb->start;
+}
+
+static struct numa_memblk *numa_memblk_list[NR_NODE_MEMBLKS] __initdata;
+
+/**
+ * numa_fill_memblks - Fill gaps in numa_meminfo memblks
+ * @start: address to begin fill
+ * @end: address to end fill
+ *
+ * Find and extend numa_meminfo memblks to cover the @start-@end
+ * physical address range, such that the first memblk includes
+ * @start, the last memblk includes @end, and any gaps in between
+ * are filled.
+ *
+ * RETURNS:
+ * 0		  : Success
+ * NUMA_NO_MEMBLK : No memblk exists in @start-@end range
+ */
+
+int __init numa_fill_memblks(u64 start, u64 end)
+{
+	struct numa_memblk **blk = &numa_memblk_list[0];
+	struct numa_meminfo *mi = &numa_meminfo;
+	int count = 0;
+	u64 prev_end;
+
+	/*
+	 * Create a list of pointers to numa_meminfo memblks that
+	 * overlap start, end. Exclude (start == bi->end) since
+	 * end addresses in both a CFMWS range and a memblk range
+	 * are exclusive.
+	 *
+	 * This list of pointers is used to make in-place changes
+	 * that fill out the numa_meminfo memblks.
+	 */
+	for (int i = 0; i < mi->nr_blks; i++) {
+		struct numa_memblk *bi = &mi->blk[i];
+
+		if (start < bi->end && end >= bi->start) {
+			blk[count] = &mi->blk[i];
+			count++;
+		}
+	}
+	if (!count)
+		return NUMA_NO_MEMBLK;
+
+	/* Sort the list of pointers in memblk->start order */
+	sort(&blk[0], count, sizeof(blk[0]), cmp_memblk, NULL);
+
+	/* Make sure the first/last memblks include start/end */
+	blk[0]->start = min(blk[0]->start, start);
+	blk[count - 1]->end = max(blk[count - 1]->end, end);
+
+	/*
+	 * Fill any gaps by tracking the previous memblks
+	 * end address and backfilling to it if needed.
+	 */
+	prev_end = blk[0]->end;
+	for (int i = 1; i < count; i++) {
+		struct numa_memblk *curr = blk[i];
+
+		if (prev_end >= curr->start) {
+			if (prev_end < curr->end)
+				prev_end = curr->end;
+		} else {
+			curr->start = prev_end;
+			prev_end = curr->end;
+		}
+	}
+	return 0;
+}
+
 #endif
diff --git a/include/linux/numa.h b/include/linux/numa.h
index 59df211d051f..0f512c0aba54 100644
--- a/include/linux/numa.h
+++ b/include/linux/numa.h
@@ -12,6 +12,7 @@
 #define MAX_NUMNODES    (1 << NODES_SHIFT)
 #define NUMA_NO_NODE    (-1)
+#define NUMA_NO_MEMBLK  (-1)

 /* optionally keep NUMA memory info available post init */
 #ifdef CONFIG_NUMA_KEEP_MEMINFO
@@ -43,6 +44,12 @@ static inline int phys_to_target_node(u64 start)
 	return 0;
 }
 #endif
+#ifndef numa_fill_memblks
+static inline int __init numa_fill_memblks(u64 start, u64 end)
+{
+	return NUMA_NO_MEMBLK;
+}
+#endif
 #else /* !CONFIG_NUMA */
 static inline int numa_map_to_online_node(int node)
 {