From patchwork Wed Jun 14 04:35:24 2023
X-Patchwork-Submitter: Alison Schofield
X-Patchwork-Id: 107701
From: alison.schofield@intel.com
To: "Rafael J. Wysocki", Len Brown, Dan Williams, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin",
    Andy Lutomirski, Peter Zijlstra, Andrew Morton, Jonathan Cameron,
    Dave Jiang, Mike Rapoport
Cc: Alison Schofield, x86@kernel.org, linux-cxl@vger.kernel.org,
    linux-acpi@vger.kernel.org, linux-kernel@vger.kernel.org, Derick Marks
Subject: [PATCH v2 1/2] x86/numa: Introduce numa_fill_memblks()
Date: Tue, 13 Jun 2023 21:35:24 -0700
Message-Id: <9fcc548a6b4727cb2538e5227d7bad2e94e6adaf.1686712819.git.alison.schofield@intel.com>

From: Alison Schofield

numa_fill_memblks() fills in the gaps in numa_meminfo memblks over an
HPA address range.
The ACPI driver will use numa_fill_memblks() to implement a new Linux
policy that prescribes extending proximity domains in a portion of a
CFMWS window to the entire window.

Dan Williams offered this explanation of the policy:

A CFMWS is an ACPI data structure that indicates *potential* locations
where CXL memory can be placed. It is the playground where the CXL
driver has free rein to establish regions. That space can be populated
by BIOS-created regions, or driver-created regions, after hotplug or
other reconfiguration.

When BIOS creates a region in a CXL Window it additionally describes
that subset of the Window range in the other typical ACPI tables SRAT,
SLIT, and HMAT. The rationale for BIOS not pre-describing the entire
CXL Window in SRAT, SLIT, and HMAT is that it cannot predict the
future. I.e. there is nothing stopping higher or lower performance
devices being placed in the same Window. Compare that to ACPI memory
hotplug that just onlines additional capacity in the proximity domain
with little freedom for dynamic performance differentiation.

That leaves the OS with a choice: should unpopulated window capacity
match the proximity domain of an existing region, or should it
allocate a new one? This patch takes the simple position of minimizing
proximity domain proliferation by reusing any proximity domain
intersection for the entire Window. If the Window has no intersections
then allocate a new proximity domain.

Note that SRAT, SLIT and HMAT information can be enumerated
dynamically in a standard way from device-provided data. Think of CXL
as the end of ACPI needing to describe memory attributes; CXL offers a
standard discovery model for performance attributes, but Linux still
needs to interoperate with the old regime.
Reported-by: Derick Marks
Suggested-by: Dan Williams
Signed-off-by: Alison Schofield
Tested-by: Derick Marks
---
 arch/x86/include/asm/sparsemem.h |  2 +
 arch/x86/mm/numa.c               | 87 ++++++++++++++++++++++++++++++++
 include/linux/numa.h             |  7 +++
 3 files changed, 96 insertions(+)

diff --git a/arch/x86/include/asm/sparsemem.h b/arch/x86/include/asm/sparsemem.h
index 64df897c0ee3..1be13b2dfe8b 100644
--- a/arch/x86/include/asm/sparsemem.h
+++ b/arch/x86/include/asm/sparsemem.h
@@ -37,6 +37,8 @@ extern int phys_to_target_node(phys_addr_t start);
 #define phys_to_target_node phys_to_target_node
 extern int memory_add_physaddr_to_nid(u64 start);
 #define memory_add_physaddr_to_nid memory_add_physaddr_to_nid
+extern int numa_fill_memblks(u64 start, u64 end);
+#define numa_fill_memblks numa_fill_memblks
 #endif
 #endif /* __ASSEMBLY__ */
diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index 2aadb2019b4f..fa82141d1a04 100644
--- a/arch/x86/mm/numa.c
+++ b/arch/x86/mm/numa.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -961,4 +962,90 @@ int memory_add_physaddr_to_nid(u64 start)
 	return nid;
 }
 EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
+
+static int __init cmp_memblk(const void *a, const void *b)
+{
+	const struct numa_memblk *ma = *(const struct numa_memblk **)a;
+	const struct numa_memblk *mb = *(const struct numa_memblk **)b;
+
+	if (ma->start != mb->start)
+		return (ma->start < mb->start) ? -1 : 1;
+
+	/* Caller handles duplicate start addresses */
+	return 0;
+}
+
+static struct numa_memblk *numa_memblk_list[NR_NODE_MEMBLKS] __initdata;
+
+/**
+ * numa_fill_memblks - Fill gaps in numa_meminfo memblks
+ * @start: address to begin fill
+ * @end: address to end fill
+ *
+ * Find and extend numa_meminfo memblks to cover the @start-@end
+ * HPA address range, such that the first memblk includes @start,
+ * the last memblk includes @end, and any gaps in between are
+ * filled.
+ *
+ * RETURNS:
+ * 0		  : Success
+ * NUMA_NO_MEMBLK : No memblk exists in @start-@end range
+ */
+
+int __init numa_fill_memblks(u64 start, u64 end)
+{
+	struct numa_memblk **blk = &numa_memblk_list[0];
+	struct numa_meminfo *mi = &numa_meminfo;
+	int count = 0;
+	u64 prev_end;
+
+	/*
+	 * Create a list of pointers to numa_meminfo memblks that
+	 * overlap start, end. Exclude (start == bi->end) since
+	 * end addresses in both a CFMWS range and a memblk range
+	 * are exclusive.
+	 *
+	 * This list of pointers is used to make in-place changes
+	 * that fill out the numa_meminfo memblks.
+	 */
+	for (int i = 0; i < mi->nr_blks; i++) {
+		struct numa_memblk *bi = &mi->blk[i];
+
+		if (start < bi->end && end >= bi->start) {
+			blk[count] = &mi->blk[i];
+			count++;
+		}
+	}
+	if (!count)
+		return NUMA_NO_MEMBLK;
+
+	/* Sort the list of pointers in memblk->start order */
+	sort(&blk[0], count, sizeof(blk[0]), cmp_memblk, NULL);
+
+	/* Make sure the first/last memblks include start/end */
+	blk[0]->start = min(blk[0]->start, start);
+	blk[count - 1]->end = max(blk[count - 1]->end, end);
+
+	/*
+	 * Fill any gaps by tracking the previous memblks end address,
+	 * prev_end, and backfilling to it if needed. Avoid filling
+	 * overlapping memblks by making prev_end monotonically non-
+	 * decreasing.
+	 */
+	prev_end = blk[0]->end;
+	for (int i = 1; i < count; i++) {
+		struct numa_memblk *curr = blk[i];
+
+		if (prev_end >= curr->start) {
+			if (prev_end < curr->end)
+				prev_end = curr->end;
+		} else {
+			curr->start = prev_end;
+			prev_end = curr->end;
+		}
+	}
+	return 0;
+}
+EXPORT_SYMBOL_GPL(numa_fill_memblks);
+
 #endif
diff --git a/include/linux/numa.h b/include/linux/numa.h
index 59df211d051f..0f512c0aba54 100644
--- a/include/linux/numa.h
+++ b/include/linux/numa.h
@@ -12,6 +12,7 @@
 #define MAX_NUMNODES	(1 << NODES_SHIFT)
 #define NUMA_NO_NODE	(-1)
+#define NUMA_NO_MEMBLK	(-1)

 /* optionally keep NUMA memory info available post init */
 #ifdef CONFIG_NUMA_KEEP_MEMINFO
@@ -43,6 +44,12 @@ static inline int phys_to_target_node(u64 start)
 	return 0;
 }
 #endif
+#ifndef numa_fill_memblks
+static inline int __init numa_fill_memblks(u64 start, u64 end)
+{
+	return NUMA_NO_MEMBLK;
+}
+#endif
 #else /* !CONFIG_NUMA */
 static inline int numa_map_to_online_node(int node)
 {

From patchwork Wed Jun 14 04:35:25 2023
X-Patchwork-Submitter: Alison Schofield
X-Patchwork-Id: 107697
From: alison.schofield@intel.com
To: "Rafael J. Wysocki", Len Brown, Dan Williams, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin",
    Andy Lutomirski, Peter Zijlstra, Andrew Morton, Jonathan Cameron,
    Dave Jiang, Mike Rapoport
Cc: Alison Schofield, x86@kernel.org, linux-cxl@vger.kernel.org,
    linux-acpi@vger.kernel.org, linux-kernel@vger.kernel.org, Derick Marks
Subject: [PATCH v2 2/2] ACPI: NUMA: Apply SRAT proximity domain to entire CFMWS window
Date: Tue, 13 Jun 2023 21:35:25 -0700
Message-Id: <2871681bbe6aeac8a5d8f197d6f21749da9d75d7.1686712819.git.alison.schofield@intel.com>

From: Alison Schofield

Commit fd49f99c1809 ("ACPI: NUMA: Add a node and memblk for each CFMWS
not in SRAT") did not account for the case where the BIOS only
partially describes a CFMWS Window in the SRAT.
That means the address ranges omitted from a partially described CFMWS
Window do not get assigned to a NUMA node.

Replace the call to phys_to_target_node() with numa_fill_memblks().
numa_fill_memblks() searches an HPA range for existing memblk(s) and
extends those memblk(s) to fill the entire CFMWS Window.

Extending the existing memblks is a simple strategy that reuses
SRAT-defined proximity domains from part of a window to fill out the
entire window, based on the knowledge* that all of a CFMWS window is
of a similar performance class.

*Note that this heuristic will evolve when CFMWS Windows present a
wider range of characteristics. The extension of the proximity domain,
implemented here, is likely a step in developing a more sophisticated
performance profile in the future.

There is no change in behavior when the SRAT does not describe the
CFMWS Window at all. In that case, a new NUMA node with a single
memblk covering the entire CFMWS Window is created.

Fixes: fd49f99c1809 ("ACPI: NUMA: Add a node and memblk for each CFMWS not in SRAT")
Reported-by: Derick Marks
Suggested-by: Dan Williams
Signed-off-by: Alison Schofield
Tested-by: Derick Marks
---
 drivers/acpi/numa/srat.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/drivers/acpi/numa/srat.c b/drivers/acpi/numa/srat.c
index 1f4fc5f8a819..12f330b0eac0 100644
--- a/drivers/acpi/numa/srat.c
+++ b/drivers/acpi/numa/srat.c
@@ -310,11 +310,16 @@ static int __init acpi_parse_cfmws(union acpi_subtable_headers *header,
 	start = cfmws->base_hpa;
 	end = cfmws->base_hpa + cfmws->window_size;

-	/* Skip if the SRAT already described the NUMA details for this HPA */
-	node = phys_to_target_node(start);
-	if (node != NUMA_NO_NODE)
+	/*
+	 * The SRAT may have already described NUMA details for all,
+	 * or a portion of, this CFMWS HPA range. Extend the memblks
+	 * found for any portion of the window to cover the entire
+	 * window.
+	 */
+	if (!numa_fill_memblks(start, end))
 		return 0;

+	/* No SRAT description. Create a new node. */
 	node = acpi_map_pxm_to_node(*fake_pxm);
 	if (node == NUMA_NO_NODE) {