From patchwork Thu Mar 30 11:49:43 2023
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 77159
From: "Kirill A. Shutemov"
To: Borislav Petkov, Andy Lutomirski, Sean Christopherson, Andrew Morton,
    Joerg Roedel, Ard Biesheuvel
Cc: Andi Kleen, Kuppuswamy Sathyanarayanan, David Rientjes, Vlastimil Babka,
    Tom Lendacky, Thomas Gleixner, Peter Zijlstra, Paolo Bonzini, Ingo Molnar,
    Dario Faggioli, Dave Hansen, Mike Rapoport, David Hildenbrand, Mel Gorman,
    marcelo.cerri@canonical.com, tim.gardner@canonical.com,
    khalid.elmously@canonical.com, philip.cox@canonical.com,
    aarcange@redhat.com, peterx@redhat.com, x86@kernel.org, linux-mm@kvack.org,
    linux-coco@lists.linux.dev, linux-efi@vger.kernel.org,
    linux-kernel@vger.kernel.org, "Kirill A. Shutemov", Mike Rapoport,
    Dave Hansen
Shutemov" , Mike Rapoport , Dave Hansen Subject: [PATCHv9 01/14] x86/boot: Centralize __pa()/__va() definitions Date: Thu, 30 Mar 2023 14:49:43 +0300 Message-Id: <20230330114956.20342-2-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230330114956.20342-1-kirill.shutemov@linux.intel.com> References: <20230330114956.20342-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 X-Spam-Status: No, score=-2.4 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761794013191794401?= X-GMAIL-MSGID: =?utf-8?q?1761794013191794401?= Replace multiple __pa()/__va() definitions with a single one in misc.h. Signed-off-by: Kirill A. Shutemov Reviewed-by: David Hildenbrand Reviewed-by: Mike Rapoport Reviewed-by: Dave Hansen --- arch/x86/boot/compressed/ident_map_64.c | 8 -------- arch/x86/boot/compressed/misc.h | 9 +++++++++ arch/x86/boot/compressed/sev.c | 2 -- 3 files changed, 9 insertions(+), 10 deletions(-) diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c index 321a5011042d..bcc956c17872 100644 --- a/arch/x86/boot/compressed/ident_map_64.c +++ b/arch/x86/boot/compressed/ident_map_64.c @@ -8,14 +8,6 @@ * Copyright (C) 2016 Kees Cook */ -/* - * Since we're dealing with identity mappings, physical and virtual - * addresses are the same, so override these defines which are ultimately - * used by the headers in misc.h. - */ -#define __pa(x) ((unsigned long)(x)) -#define __va(x) ((void *)((unsigned long)(x))) - /* No PAGE_TABLE_ISOLATION support needed either: */ #undef CONFIG_PAGE_TABLE_ISOLATION diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h index 20118fb7c53b..2f155a0e3041 100644 --- a/arch/x86/boot/compressed/misc.h +++ b/arch/x86/boot/compressed/misc.h @@ -19,6 +19,15 @@ /* cpu_feature_enabled() cannot be used this early */ #define USE_EARLY_PGTABLE_L5 +/* + * Boot stub deals with identity mappings, physical and virtual addresses are + * the same, so override these defines. + * + * will not define them if they are already defined. + */ +#define __pa(x) ((unsigned long)(x)) +#define __va(x) ((void *)((unsigned long)(x))) + #include #include #include diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c index d63ad8f99f83..014b89c89088 100644 --- a/arch/x86/boot/compressed/sev.c +++ b/arch/x86/boot/compressed/sev.c @@ -104,9 +104,7 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt, } #undef __init -#undef __pa #define __init -#define __pa(x) ((unsigned long)(x)) #define __BOOT_COMPRESSED From patchwork Thu Mar 30 11:49:44 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A. 
Shutemov" X-Patchwork-Id: 77148 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp1065600vqo; Thu, 30 Mar 2023 04:53:55 -0700 (PDT) X-Google-Smtp-Source: AKy350Ys5Y2bKc3Vxb/xhmnTbdydSYZkcbB/vJ8ACJCRTzGpr8xeGe94WWMuidfC5rdrP2g3NkNH X-Received: by 2002:a17:90b:3a8f:b0:240:3ee4:d2d1 with SMTP id om15-20020a17090b3a8f00b002403ee4d2d1mr25035504pjb.13.1680177235438; Thu, 30 Mar 2023 04:53:55 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1680177235; cv=none; d=google.com; s=arc-20160816; b=z6qo2HhUKZJZJhFzZV108rPLNFFLAglpsVr7AOtfo/Gfq1qrpKpvFn95tZPhFe9QF2 hn30gzGQcJpQsMzwY8uaAjVJz/Sm5Qorw85sJctyUoULAONPd9y/x4ADk7QeFaqr1qcZ ZC3XrKuHEd84zsZ8EtYTt/8mC0uiQv240c4WIIxz3/QlCljNuU8B+UkEtl+uuZDpKEze eNcwxkiS3FunaVNBTPhnWY6cAmczpIItPLQx1jnDxeEgl7vbGyO1+lSGIx+Ug6UBx5cS PefkawzA0Ubewk50Am+LAbLUL7TRhGdYAHwx4bOsovRG1mJhgJVV/NIevvw4fwHclrhP bdPA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=adaCvjbv4hhW96WqvWDJJxIi6tKE67/FRBSOoiO5XGo=; b=aybU/ca/YBoCNRKsVqeGLGr9+iEgFd4IotwH+hHyM/9j3q38qICZdggt2RXIG/aqQo pvUszOiVt6a6LUp+zEDhjtso/zc3Xq5SK/8wB7sBTC7TSx13HqjuqSyuJEuPqBdPFOrk i91VWIGsAU7JBo1njx0dshL+qs4gSuXbyWPqxJemUajO/AW0eM70aaoWAI/6Y22AjmAo i4eWncHwNLmfwEPhUGRMsHrd6hYR5d2oeM0gr/L6Kmk5+0O2YkDG6jQh+8h4xaB0Tl+I gEfEGdt0ahuT5mYFU8kYKfpnkK1ohlyULhhucrUMOkJqBHWmdY0AAxfZY/4bmKzRaiYN tCtQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=j3MkSlPh; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id ot5-20020a17090b3b4500b002335ea8726dsi4129690pjb.88.2023.03.30.04.53.42; Thu, 30 Mar 2023 04:53:55 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=j3MkSlPh; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231321AbjC3LuR (ORCPT + 99 others); Thu, 30 Mar 2023 07:50:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43416 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231159AbjC3LuP (ORCPT ); Thu, 30 Mar 2023 07:50:15 -0400 Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6F92F2108; Thu, 30 Mar 2023 04:50:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1680177011; x=1711713011; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=ZA6Ow13DZ6fov0DkWaGCGd0ySWhtAHUVohuKyDxTtmM=; b=j3MkSlPhyEIH+jPw7xsELfRX1v6nsu8tSAeyYN0oK1K97+sJW8MLrQy/ RTJcrwItofFwNQRyiIP/ZgeH9w+iRTvw92OSm+Vwu8bB0GH7KAWw2c3SS ik3iFf2lhRPoBS4hv+/gcL/yKdlAxOTc4TsaCupkrsZ3fCBiIAsPWqzD7 NVZdZ3yPdmsTjN/Ta/hbcjoyOVl/UDRiNVmP91JXOL5zTk7L/6sqHEcuh KIQSsnEbLpZMF6qKBYDaYC9N5bJRwMwQIUxwQ3MJngPzFk9Ouvqdo90b5 +FvTkkf9YBp6nFadJoCj6RF2irANS9uKpeymt2QHdLcSNqgxNxL70As3i g==; X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="339868371" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="339868371" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Mar 2023 04:50:10 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="1014401426" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="1014401426" Received: from ngreburx-mobl.ger.corp.intel.com (HELO box.shutemov.name) ([10.251.209.91]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Mar 2023 04:50:02 -0700 Received: by box.shutemov.name (Postfix, from userid 1000) id EFD9310438D; Thu, 30 Mar 2023 14:49:59 +0300 (+03) From: "Kirill A. Shutemov" To: Borislav Petkov , Andy Lutomirski , Sean Christopherson , Andrew Morton , Joerg Roedel , Ard Biesheuvel Cc: Andi Kleen , Kuppuswamy Sathyanarayanan , David Rientjes , Vlastimil Babka , Tom Lendacky , Thomas Gleixner , Peter Zijlstra , Paolo Bonzini , Ingo Molnar , Dario Faggioli , Dave Hansen , Mike Rapoport , David Hildenbrand , Mel Gorman , marcelo.cerri@canonical.com, tim.gardner@canonical.com, khalid.elmously@canonical.com, philip.cox@canonical.com, aarcange@redhat.com, peterx@redhat.com, x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev, linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A. 
Shutemov" , Mike Rapoport Subject: [PATCHv9 02/14] mm: Add support for unaccepted memory Date: Thu, 30 Mar 2023 14:49:44 +0300 Message-Id: <20230330114956.20342-3-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230330114956.20342-1-kirill.shutemov@linux.intel.com> References: <20230330114956.20342-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 X-Spam-Status: No, score=-2.4 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761793524476850995?= X-GMAIL-MSGID: =?utf-8?q?1761793524476850995?= UEFI Specification version 2.9 introduces the concept of memory acceptance. Some Virtual Machine platforms, such as Intel TDX or AMD SEV-SNP, require memory to be accepted before it can be used by the guest. Accepting happens via a protocol specific to the Virtual Machine platform. There are several ways kernel can deal with unaccepted memory: 1. Accept all the memory during the boot. It is easy to implement and it doesn't have runtime cost once the system is booted. The downside is very long boot time. Accept can be parallelized to multiple CPUs to keep it manageable (i.e. via DEFERRED_STRUCT_PAGE_INIT), but it tends to saturate memory bandwidth and does not scale beyond the point. 2. Accept a block of memory on the first use. It requires more infrastructure and changes in page allocator to make it work, but it provides good boot time. On-demand memory accept means latency spikes every time kernel steps onto a new memory block. The spikes will go away once workload data set size gets stabilized or all memory gets accepted. 3. Accept all memory in background. Introduce a thread (or multiple) that gets memory accepted proactively. It will minimize time the system experience latency spikes on memory allocation while keeping low boot time. This approach cannot function on its own. It is an extension of #2: background memory acceptance requires functional scheduler, but the page allocator may need to tap into unaccepted memory before that. The downside of the approach is that these threads also steal CPU cycles and memory bandwidth from the user's workload and may hurt user experience. The patch implements #1 and #2 for now. #2 is the default. Some workloads may want to use #1 with accept_memory=eager in kernel command line. #3 can be implemented later based on user's demands. Support of unaccepted memory requires a few changes in core-mm code: - memblock has to accept memory on allocation; - page allocator has to accept memory on the first allocation of the page; Memblock change is trivial. The page allocator is modified to accept pages. New memory gets accepted before putting pages on free lists. It is done lazily: only accept new pages when we run out of already accepted memory. The memory gets accepted until the high watermark is reached. Architecture has to provide two helpers if it wants to support unaccepted memory: - accept_memory() makes a range of physical addresses accepted. - range_contains_unaccepted_memory() checks anything within the range of physical addresses requires acceptance. Signed-off-by: Kirill A. 
Acked-by: Mike Rapoport	# memblock
Reviewed-by: Vlastimil Babka
---
 drivers/base/node.c    |   7 ++
 fs/proc/meminfo.c      |   5 ++
 include/linux/mmzone.h |   8 ++
 mm/internal.h          |  13 ++++
 mm/memblock.c          |   9 +++
 mm/mm_init.c           |   7 ++
 mm/page_alloc.c        | 161 +++++++++++++++++++++++++++++++++++++++++
 mm/vmstat.c            |   3 +
 8 files changed, 213 insertions(+)

diff --git a/drivers/base/node.c b/drivers/base/node.c
index b46db17124f3..655975946ef6 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -448,6 +448,9 @@ static ssize_t node_read_meminfo(struct device *dev,
 			     "Node %d ShmemPmdMapped: %8lu kB\n"
 			     "Node %d FileHugePages: %8lu kB\n"
 			     "Node %d FilePmdMapped: %8lu kB\n"
+#endif
+#ifdef CONFIG_UNACCEPTED_MEMORY
+			     "Node %d Unaccepted: %8lu kB\n"
 #endif
 			     ,
 			     nid, K(node_page_state(pgdat, NR_FILE_DIRTY)),
@@ -477,6 +480,10 @@ static ssize_t node_read_meminfo(struct device *dev,
 			     nid, K(node_page_state(pgdat, NR_SHMEM_PMDMAPPED)),
 			     nid, K(node_page_state(pgdat, NR_FILE_THPS)),
 			     nid, K(node_page_state(pgdat, NR_FILE_PMDMAPPED))
+#endif
+#ifdef CONFIG_UNACCEPTED_MEMORY
+			     ,
+			     nid, K(sum_zone_node_page_state(nid, NR_UNACCEPTED))
 #endif
 			     );
 	len += hugetlb_report_node_meminfo(buf, len, nid);

diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
index b43d0bd42762..8dca4d6d96c7 100644
--- a/fs/proc/meminfo.c
+++ b/fs/proc/meminfo.c
@@ -168,6 +168,11 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
 		    global_zone_page_state(NR_FREE_CMA_PAGES));
 #endif

+#ifdef CONFIG_UNACCEPTED_MEMORY
+	show_val_kb(m, "Unaccepted: ",
+		    global_zone_page_state(NR_UNACCEPTED));
+#endif
+
 	hugetlb_report_meminfo(m);

 	arch_report_meminfo(m);

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 72837e019bd1..c5f50ad19870 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -148,6 +148,9 @@ enum zone_stat_item {
 	NR_ZSPAGES,		/* allocated in zsmalloc */
 #endif
 	NR_FREE_CMA_PAGES,
+#ifdef CONFIG_UNACCEPTED_MEMORY
+	NR_UNACCEPTED,
+#endif
 	NR_VM_ZONE_STAT_ITEMS };

 enum node_stat_item {
@@ -919,6 +922,11 @@ struct zone {
 	/* free areas of different sizes */
 	struct free_area	free_area[MAX_ORDER + 1];

+#ifdef CONFIG_UNACCEPTED_MEMORY
+	/* Pages to be accepted. All pages on the list are MAX_ORDER */
+	struct list_head	unaccepted_pages;
+#endif
+
 	/* zone flags, see below */
 	unsigned long		flags;

diff --git a/mm/internal.h b/mm/internal.h
index c05ad651b515..748bfeac1fea 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1114,4 +1114,17 @@ struct vma_prepare {
 	struct vm_area_struct *remove;
 	struct vm_area_struct *remove2;
 };
+
+#ifndef CONFIG_UNACCEPTED_MEMORY
+static inline bool range_contains_unaccepted_memory(phys_addr_t start,
+						    phys_addr_t end)
+{
+	return false;
+}
+
+static inline void accept_memory(phys_addr_t start, phys_addr_t end)
+{
+}
+#endif
+
 #endif	/* __MM_INTERNAL_H */

diff --git a/mm/memblock.c b/mm/memblock.c
index 7911224b1ed3..54f89d9ac98e 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1436,6 +1436,15 @@ phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
 	 */
 	kmemleak_alloc_phys(found, size, 0);

+	/*
+	 * Some Virtual Machine platforms, such as Intel TDX or AMD SEV-SNP,
+	 * require memory to be accepted before it can be used by the
+	 * guest.
+	 *
+	 * Accept the memory of the allocated buffer.
+	 */
+	accept_memory(found, found + size);
+
 	return found;
 }

diff --git a/mm/mm_init.c b/mm/mm_init.c
index dd3a6ed9663f..5e5afbefda1e 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1373,6 +1373,10 @@ static void __meminit zone_init_free_lists(struct zone *zone)
 			INIT_LIST_HEAD(&zone->free_area[order].free_list[t]);
 		zone->free_area[order].nr_free = 0;
 	}
+
+#ifdef CONFIG_UNACCEPTED_MEMORY
+	INIT_LIST_HEAD(&zone->unaccepted_pages);
+#endif
 }

 void __meminit init_currently_empty_zone(struct zone *zone,
@@ -1958,6 +1962,9 @@ static void __init deferred_free_range(unsigned long pfn,
 		return;
 	}

+	/* Accept chunks smaller than MAX_ORDER upfront */
+	accept_memory(PFN_PHYS(pfn), PFN_PHYS(pfn + nr_pages));
+
 	for (i = 0; i < nr_pages; i++, page++, pfn++) {
 		if (pageblock_aligned(pfn))
 			set_pageblock_migratetype(page, MIGRATE_MOVABLE);

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0767dd6bc5ba..d62fcb2f28bd 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -387,6 +387,11 @@ EXPORT_SYMBOL(nr_node_ids);
 EXPORT_SYMBOL(nr_online_nodes);
 #endif

+static bool page_contains_unaccepted(struct page *page, unsigned int order);
+static void accept_page(struct page *page, unsigned int order);
+static bool try_to_accept_memory(struct zone *zone, unsigned int order);
+static bool __free_unaccepted(struct page *page);
+
 int page_group_by_mobility_disabled __read_mostly;

 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
@@ -1481,6 +1486,13 @@ void __free_pages_core(struct page *page, unsigned int order)

 	atomic_long_add(nr_pages, &page_zone(page)->managed_pages);

+	if (page_contains_unaccepted(page, order)) {
+		if (order == MAX_ORDER && __free_unaccepted(page))
+			return;
+
+		accept_page(page, order);
+	}
+
 	/*
 	 * Bypass PCP and place fresh pages right to the tail, primarily
 	 * relevant for memory onlining.
@@ -3150,6 +3162,9 @@ static inline long __zone_watermark_unusable_free(struct zone *z,
 	if (!(alloc_flags & ALLOC_CMA))
 		unusable_free += zone_page_state(z, NR_FREE_CMA_PAGES);
 #endif
+#ifdef CONFIG_UNACCEPTED_MEMORY
+	unusable_free += zone_page_state(z, NR_UNACCEPTED);
+#endif

 	return unusable_free;
 }
@@ -3449,6 +3464,9 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 				       gfp_mask)) {
 			int ret;

+			if (try_to_accept_memory(zone, order))
+				goto try_this_zone;
+
 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
 			/*
 			 * Watermark failed for this zone, but see if we can
@@ -3501,6 +3519,9 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,

 			return page;
 		} else {
+			if (try_to_accept_memory(zone, order))
+				goto try_this_zone;
+
 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
 			/* Try again if zone has deferred pages */
 			if (deferred_pages_enabled()) {
@@ -7184,3 +7205,143 @@ bool has_managed_dma(void)
 	return false;
 }
 #endif /* CONFIG_ZONE_DMA */
+
+#ifdef CONFIG_UNACCEPTED_MEMORY
+
+/* Counts number of zones with unaccepted pages. */
+static DEFINE_STATIC_KEY_FALSE(zones_with_unaccepted_pages);
+
+static bool lazy_accept = true;
+
+static int __init accept_memory_parse(char *p)
+{
+	if (!strcmp(p, "lazy")) {
+		lazy_accept = true;
+		return 0;
+	} else if (!strcmp(p, "eager")) {
+		lazy_accept = false;
+		return 0;
+	} else {
+		return -EINVAL;
+	}
+}
+early_param("accept_memory", accept_memory_parse);
+
+static bool page_contains_unaccepted(struct page *page, unsigned int order)
+{
+	phys_addr_t start = page_to_phys(page);
+	phys_addr_t end = start + (PAGE_SIZE << order);
+
+	return range_contains_unaccepted_memory(start, end);
+}
+
+static void accept_page(struct page *page, unsigned int order)
+{
+	phys_addr_t start = page_to_phys(page);
+
+	accept_memory(start, start + (PAGE_SIZE << order));
+}
+
+static bool try_to_accept_memory_one(struct zone *zone)
+{
+	unsigned long flags;
+	struct page *page;
+	bool last;
+
+	if (list_empty(&zone->unaccepted_pages))
+		return false;
+
+	spin_lock_irqsave(&zone->lock, flags);
+	page = list_first_entry_or_null(&zone->unaccepted_pages,
+					struct page, lru);
+	if (!page) {
+		spin_unlock_irqrestore(&zone->lock, flags);
+		return false;
+	}
+
+	list_del(&page->lru);
+	last = list_empty(&zone->unaccepted_pages);
+
+	__mod_zone_freepage_state(zone, -MAX_ORDER_NR_PAGES, MIGRATE_MOVABLE);
+	__mod_zone_page_state(zone, NR_UNACCEPTED, -MAX_ORDER_NR_PAGES);
+	spin_unlock_irqrestore(&zone->lock, flags);
+
+	accept_page(page, MAX_ORDER);
+
+	__free_pages_ok(page, MAX_ORDER, FPI_TO_TAIL);
+
+	if (last)
+		static_branch_dec(&zones_with_unaccepted_pages);
+
+	return true;
+}
+
+static bool try_to_accept_memory(struct zone *zone, unsigned int order)
+{
+	long to_accept;
+	int ret = false;
+
+	if (!static_branch_unlikely(&zones_with_unaccepted_pages))
+		return false;
+
+	/* How much to accept to get to high watermark? */
+	to_accept = high_wmark_pages(zone) -
+		    (zone_page_state(zone, NR_FREE_PAGES) -
+		    __zone_watermark_unusable_free(zone, order, 0));
+
+	/* Accept at least one page */
+	do {
+		if (!try_to_accept_memory_one(zone))
+			break;
+		ret = true;
+		to_accept -= MAX_ORDER_NR_PAGES;
+	} while (to_accept > 0);
+
+	return ret;
+}
+
+static bool __free_unaccepted(struct page *page)
+{
+	struct zone *zone = page_zone(page);
+	unsigned long flags;
+	bool first = false;
+
+	if (!lazy_accept)
+		return false;
+
+	spin_lock_irqsave(&zone->lock, flags);
+	first = list_empty(&zone->unaccepted_pages);
+	list_add_tail(&page->lru, &zone->unaccepted_pages);
+	__mod_zone_freepage_state(zone, MAX_ORDER_NR_PAGES, MIGRATE_MOVABLE);
+	__mod_zone_page_state(zone, NR_UNACCEPTED, MAX_ORDER_NR_PAGES);
+	spin_unlock_irqrestore(&zone->lock, flags);
+
+	if (first)
+		static_branch_inc(&zones_with_unaccepted_pages);
+
+	return true;
+}
+
+#else
+
+static bool page_contains_unaccepted(struct page *page, unsigned int order)
+{
+	return false;
+}
+
+static void accept_page(struct page *page, unsigned int order)
+{
+}
+
+static bool try_to_accept_memory(struct zone *zone, unsigned int order)
+{
+	return false;
+}
+
+static bool __free_unaccepted(struct page *page)
+{
+	BUILD_BUG();
+	return false;
+}
+
+#endif /* CONFIG_UNACCEPTED_MEMORY */

diff --git a/mm/vmstat.c b/mm/vmstat.c
index 0a6d742322db..16ec8b994ef3 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1256,6 +1256,9 @@ const char * const vmstat_text[] = {
 	"nr_zspages",
 #endif
 	"nr_free_cma",
+#ifdef CONFIG_UNACCEPTED_MEMORY
+	"nr_unaccepted",
+#endif

 	/* enum numa_stat_item counters */
 #ifdef CONFIG_NUMA
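To make the architecture contract from the commit message concrete, here is a
minimal, illustrative sketch of one possible backend for the two helpers. It
is not taken from this series: the unaccepted_bitmap variable, the 2MB
(PMD_SIZE) tracking granularity and the arch_accept_range() call are
assumptions for illustration only.

#include <linux/bitops.h>
#include <linux/kernel.h>
#include <linux/pgtable.h>
#include <linux/types.h>

/* Hypothetical bitmap: one bit per 2MB unit, set bit == still unaccepted. */
static unsigned long *unaccepted_bitmap;

/* Placeholder for the platform-specific (TDX/SEV-SNP) acceptance protocol. */
void arch_accept_range(phys_addr_t start, phys_addr_t len);

void accept_memory(phys_addr_t start, phys_addr_t end)
{
	unsigned long unit = start / PMD_SIZE;
	unsigned long last = DIV_ROUND_UP(end, PMD_SIZE);

	for (; unit < last; unit++) {
		if (!test_bit(unit, unaccepted_bitmap))
			continue;
		arch_accept_range((phys_addr_t)unit * PMD_SIZE, PMD_SIZE);
		clear_bit(unit, unaccepted_bitmap);
	}
}

bool range_contains_unaccepted_memory(phys_addr_t start, phys_addr_t end)
{
	unsigned long first = start / PMD_SIZE;
	unsigned long last = DIV_ROUND_UP(end, PMD_SIZE);

	/* Any set bit in [first, last) means acceptance is still needed. */
	return find_next_bit(unaccepted_bitmap, last, first) < last;
}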
Shutemov" X-Patchwork-Id: 77153 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp1067288vqo; Thu, 30 Mar 2023 04:57:36 -0700 (PDT) X-Google-Smtp-Source: AK7set/j6G9sgDdeCe+UWIp8fkErIYfIpea5krKyN6m0QHc/wRfaWFQp1M3AlefqVN3Pq49SK82b X-Received: by 2002:a05:6a20:c426:b0:d5:e640:15ec with SMTP id en38-20020a056a20c42600b000d5e64015ecmr19872169pzb.29.1680177455864; Thu, 30 Mar 2023 04:57:35 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1680177455; cv=none; d=google.com; s=arc-20160816; b=XY7uW5SWMig4Bv7kaj2ggXlO6uTr7uI9jtO0CvCMX1HtwSY+b9A0J+g3h+EaGXvC++ jgVEUeSOoroQw44Gj9kPoU+Kf8mUSdnLMkEOWsSsp7kWDL9a1W0XCMPayiXImcxQdadU TtCLR34jsBJt1VL0zHS8VpjKH/znH7qL5jvNWmG5Wk8CMi4YvUeehFz1ZSDFPQNa9bjN ZFwKv3MU4IqAVJPEK524qHiB/4+R3Dvo4BCvaLysxz2flUAobgzzu4NtuIerqUNhCE2A e91Cd0D6Q1rzq7jxGaq7f6o8kJv2sIJQkmEhd+tbISoq2q5mEh1r0Ca/785wCmE0A+JQ xgrg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=3Xcx87+8at3w01ECUa4i8O3uXEYYImpaB9iORG5znps=; b=lNQY/ch8zb7K/sYOocYtu2tjMOy7dqMnC1KOS0MtR6Mhf3Tdd/lGwhLuCfLwTxu2uF y5Y7o/iCF2rgBHuCTAwn7z+GQM2KpmOAH5rygtM9peeTGDhlPekKdWbeKypUtsEKBmCg oEAhVyKDM6YlapWd5jbUC1ObhMVRtBJuoi6NlW3zEzcVrjBYOt1shuBusQCI7vvjPV5U WBKpD8BHRFbH2C6FQS1ntUcwchkURdRiRGIgkO+e7A3zyh32MyK0MA9uubyMi3SrWX/q 7eEEmdkoeE0PC+ePubpgbLAMqYuBXbKJ1EtTLVlkwEn9/Tywwomm2PEa4pv2IrrN2s/z 62Nw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=jJl+1eQB; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id l192-20020a6391c9000000b004fb7e7d565asi2938414pge.651.2023.03.30.04.57.22; Thu, 30 Mar 2023 04:57:35 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=jJl+1eQB; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231585AbjC3LvD (ORCPT + 99 others); Thu, 30 Mar 2023 07:51:03 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43726 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231727AbjC3Luk (ORCPT ); Thu, 30 Mar 2023 07:50:40 -0400 Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DB87F8A5B; Thu, 30 Mar 2023 04:50:30 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1680177031; x=1711713031; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=j9rCoZtseVkMxXJK20bhOG6vjoCUVN6BwYUe9lsjGyo=; b=jJl+1eQBQQejUCWSPKVgpFg5Jv+fiP9ujTka/dwPCrfhnD91l75vC2V8 lYfxcAbCGvzB38MWhmthHwv9kTLOW4E3jqrYUwkMOCobSZNZOMLBvGbUC C3PTnNzbh+zxpI2FkevBtvAZUwgHoD/jnoo6Lh6nSqBh3LRU8O/TMQOsB KZB0g/YoOI8GMZMBrY7j32QTX13PuynkQPl7ZFTHRi7nY+5TnkWePUzL7 gTeQ8IytE/wF2ZgtFjGm80AIpuo00av31vjZ84psxZaeqNmgAfpfPny0g tCMn6OoqodBOJ6BwqxAWgOkaHisvCpK3HJVZb4hVuzhooVm0I+WHHYzYH Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="342756723" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="342756723" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Mar 2023 04:50:30 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="634856482" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="634856482" Received: from ngreburx-mobl.ger.corp.intel.com (HELO box.shutemov.name) ([10.251.209.91]) by orsmga003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Mar 2023 04:50:02 -0700 Received: by box.shutemov.name (Postfix, from userid 1000) id 0612B10438E; Thu, 30 Mar 2023 14:50:00 +0300 (+03) From: "Kirill A. Shutemov" To: Borislav Petkov , Andy Lutomirski , Sean Christopherson , Andrew Morton , Joerg Roedel , Ard Biesheuvel Cc: Andi Kleen , Kuppuswamy Sathyanarayanan , David Rientjes , Vlastimil Babka , Tom Lendacky , Thomas Gleixner , Peter Zijlstra , Paolo Bonzini , Ingo Molnar , Dario Faggioli , Dave Hansen , Mike Rapoport , David Hildenbrand , Mel Gorman , marcelo.cerri@canonical.com, tim.gardner@canonical.com, khalid.elmously@canonical.com, philip.cox@canonical.com, aarcange@redhat.com, peterx@redhat.com, x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev, linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A. 
Shutemov" Subject: [PATCHv9 03/14] mm/page_alloc: Fake unaccepted memory Date: Thu, 30 Mar 2023 14:49:45 +0300 Message-Id: <20230330114956.20342-4-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230330114956.20342-1-kirill.shutemov@linux.intel.com> References: <20230330114956.20342-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 X-Spam-Status: No, score=-2.4 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,RCVD_IN_MSPIKE_H3, RCVD_IN_MSPIKE_WL,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761793756006982526?= X-GMAIL-MSGID: =?utf-8?q?1761793756006982526?= For testing purposes, it is useful to fake unaccepted memory in the system. It helps to understand unaccepted memory overhead to the page allocator. The patch allows to treat memory above the specified physical memory address as unaccepted. The change only fakes unaccepted memory for page allocator. Memblock is not affected. It also assumes that arch-provided accept_memory() on already accepted memory is a nop. Signed-off-by: Kirill A. Shutemov --- mm/page_alloc.c | 21 +++++++++++++++++++++ 1 file changed, 21 insertions(+) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index d62fcb2f28bd..509a93b7e5af 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -7213,6 +7213,8 @@ static DEFINE_STATIC_KEY_FALSE(zones_with_unaccepted_pages); static bool lazy_accept = true; +static unsigned long fake_unaccepted_start = -1UL; + static int __init accept_memory_parse(char *p) { if (!strcmp(p, "lazy")) { @@ -7227,11 +7229,30 @@ static int __init accept_memory_parse(char *p) } early_param("accept_memory", accept_memory_parse); +static int __init fake_unaccepted_start_parse(char *p) +{ + if (!p) + return -EINVAL; + + fake_unaccepted_start = memparse(p, &p); + + if (*p != '\0') { + fake_unaccepted_start = -1UL; + return -EINVAL; + } + + return 0; +} +early_param("fake_unaccepted_start", fake_unaccepted_start_parse); + static bool page_contains_unaccepted(struct page *page, unsigned int order) { phys_addr_t start = page_to_phys(page); phys_addr_t end = start + (PAGE_SIZE << order); + if (start >= fake_unaccepted_start) + return true; + return range_contains_unaccepted_memory(start, end); } From patchwork Thu Mar 30 11:49:46 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A. 
Shutemov" X-Patchwork-Id: 77147 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp1065313vqo; Thu, 30 Mar 2023 04:53:18 -0700 (PDT) X-Google-Smtp-Source: AKy350bai5v7dxqh0vSMQqksRLmIcm58R2hxoOtiCo9FKoVb4gfqrX4YDTey4Tyemwp7yxP0NhMn X-Received: by 2002:aa7:9728:0:b0:626:17b8:8586 with SMTP id k8-20020aa79728000000b0062617b88586mr22501542pfg.30.1680177198104; Thu, 30 Mar 2023 04:53:18 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1680177198; cv=none; d=google.com; s=arc-20160816; b=hsWBcefh0z0Fn6VgCNQjqIvyq3Fxk6DUssEvPxRRu/RQYcu8DaBuLbexH3F3HCDNSA 5lu8aOGu9PIdctbTy0Wy5mLi4H9AzbmOHQO+AVRNp4cGfT0HzkpgpXSZJL298BDWXE6K 1ns+FeHprCrejYZew7oy61uP7VHPnrUT3yW/DaT4XV1lEqMUKUo0B+KQ5wOk27qJNURy fcgG9z3AYKus5Fj1BLr0VK9oEglZ1CUzkRJNUy50TEYtXsUo7SinyF1eJ4mW9/xTzee/ poM0a+p+vHEFIY1h8vPEpYMhVo/OL24KX4AmPwL7btiV0jM9bhcr51196kzBjP092zo3 N0bQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=mlANSXqkRIS+uafV6MvTmd7E01TJ70/P1xceNx67Pmo=; b=kEjCQ2uGv6CCw7lJD8LHA7LG3LoNecd6nq/RhMyp5Y5zl0kp5Oi2m0VYzerqcho4Fb NbmV9pmku9Mxtjw+CKnh79cw8rqNY5XOuYUgeThMYT847fmb4nQZZSHIHSNwgd35mG6H 6R0go7N/Jm059jab/tXxZWDlv1Qae1oZQSDrw7fkNxP5jsDWu4K9e0jij7x1pGzuzAk1 kjYlAVckstrfOmn6RWhlV3hC28+jY1B8TVwpFJv6NAhKC15bDQmdt81kAqXtfKkGhHZQ bOqTC3yN16TspiDrhVx9EPbBdy1THZDl0tRT6hHSpAJgxhf4meEz8b6c3dIY6FVOjACu 7NVQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=AzJPoGXX; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id h186-20020a636cc3000000b0051353d86539si9438439pgc.687.2023.03.30.04.53.05; Thu, 30 Mar 2023 04:53:18 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=AzJPoGXX; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231614AbjC3Lum (ORCPT + 99 others); Thu, 30 Mar 2023 07:50:42 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43726 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231565AbjC3Lu2 (ORCPT ); Thu, 30 Mar 2023 07:50:28 -0400 Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id ACB3155AF; Thu, 30 Mar 2023 04:50:26 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1680177026; x=1711713026; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=4/nsBozr9Yxm1Svh3lW2TBRBUO1KN1TDzMtK6uriOCA=; b=AzJPoGXX2ZepZVsKvtRD644FyKJ8pUBPSBlhjelq4SlrFY0AupvJIpdK U/qUBk6INCSzYt4LvSobyzVsQ8MFz5M39rODPHFMPyDdTLTI6geQbMbIs GGLZL3w24VzjHCnofqLkFsL1YP0dHuwNCHv0LQwizSrBzNyy1n3sY55Mr UpYVa7ILqOzgUoXRIHmImpkUIKRVDphyvT7GrlvQ3wDrI2CBwq+ZQz/8n NaAkCYCBN97wFv7FXKVdiidqwCmNIvkq9T7K++s41MB9PWOzhRSJ4uWLa VzcDrE603zCiM/DStDeEu7cQTTOsZ2sf/NKIirkWJwhHZ3n5yn12zdmct g==; X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="342756691" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="342756691" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Mar 2023 04:50:25 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="634856368" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="634856368" Received: from ngreburx-mobl.ger.corp.intel.com (HELO box.shutemov.name) ([10.251.209.91]) by orsmga003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Mar 2023 04:50:02 -0700 Received: by box.shutemov.name (Postfix, from userid 1000) id 105B310438F; Thu, 30 Mar 2023 14:50:00 +0300 (+03) From: "Kirill A. Shutemov" To: Borislav Petkov , Andy Lutomirski , Sean Christopherson , Andrew Morton , Joerg Roedel , Ard Biesheuvel Cc: Andi Kleen , Kuppuswamy Sathyanarayanan , David Rientjes , Vlastimil Babka , Tom Lendacky , Thomas Gleixner , Peter Zijlstra , Paolo Bonzini , Ingo Molnar , Dario Faggioli , Dave Hansen , Mike Rapoport , David Hildenbrand , Mel Gorman , marcelo.cerri@canonical.com, tim.gardner@canonical.com, khalid.elmously@canonical.com, philip.cox@canonical.com, aarcange@redhat.com, peterx@redhat.com, x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev, linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A. 
Shutemov" Subject: [PATCHv9 04/14] mm/page_alloc: Add sysfs handle to accept accept_memory Date: Thu, 30 Mar 2023 14:49:46 +0300 Message-Id: <20230330114956.20342-5-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230330114956.20342-1-kirill.shutemov@linux.intel.com> References: <20230330114956.20342-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 X-Spam-Status: No, score=-2.4 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,RCVD_IN_MSPIKE_H3, RCVD_IN_MSPIKE_WL,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761793485832534572?= X-GMAIL-MSGID: =?utf-8?q?1761793485832534572?= Write amount of memory to accept into the new sysfs handle /sys/kernel/mm/page_alloc/accept_memory. Write 'all' to the handle to accept all memory in the system. It can be used to implement background memory accepting from userspace. It is also useful for debugging. Signed-off-by: Kirill A. Shutemov --- mm/page_alloc.c | 64 +++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 64 insertions(+) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 509a93b7e5af..07e16e9b49c4 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -7343,6 +7343,45 @@ static bool __free_unaccepted(struct page *page) return true; } +static ssize_t accept_memory_store(struct kobject *kobj, + struct kobj_attribute *attr, + const char *buf, size_t count) +{ + unsigned long to_accept = 0; + struct zone *zone; + char *retptr; + + if (sysfs_streq(buf, "all")) { + to_accept = ULONG_MAX; + } else { + to_accept = memparse(buf, &retptr); + + /* Get rid of trailing whitespace, including '\n' */ + retptr = skip_spaces(retptr); + + if (*retptr != 0 || to_accept == 0) + return -EINVAL; + } + + for_each_populated_zone(zone) { + while (try_to_accept_memory_one(zone)) { + if (to_accept <= PAGE_SIZE << MAX_ORDER) + return count; + + to_accept -= PAGE_SIZE << MAX_ORDER; + } + } + + return count; +} + +static struct kobj_attribute accept_memory_attr = __ATTR_WO(accept_memory); + +static struct attribute *page_alloc_attr[] = { + &accept_memory_attr.attr, + NULL +}; + #else static bool page_contains_unaccepted(struct page *page, unsigned int order) @@ -7366,3 +7405,28 @@ static bool __free_unaccepted(struct page *page) } #endif /* CONFIG_UNACCEPTED_MEMORY */ + +static const struct attribute_group page_alloc_attr_group = { +#ifdef CONFIG_UNACCEPTED_MEMORY + .attrs = page_alloc_attr, +#endif +}; + +static int __init page_alloc_init_sysfs(void) +{ + struct kobject *page_alloc_kobj; + int err; + + page_alloc_kobj = kobject_create_and_add("page_alloc", mm_kobj); + if (!page_alloc_kobj) + return -ENOMEM; + + err = sysfs_create_group(page_alloc_kobj, &page_alloc_attr_group); + if (err) { + kobject_put(page_alloc_kobj); + return err; + } + + return 0; +} +late_initcall(page_alloc_init_sysfs); From patchwork Thu Mar 30 11:49:47 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A. 
Shutemov" X-Patchwork-Id: 77155 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp1067466vqo; Thu, 30 Mar 2023 04:58:01 -0700 (PDT) X-Google-Smtp-Source: AKy350bEiRsrm+aEuBYSBip4JR78jWnEsqwkBF2C0jHisE+mb4ag7CxiUdCTmyJDpnHOp2WREIQj X-Received: by 2002:a17:902:e549:b0:1a1:241a:9bd0 with SMTP id n9-20020a170902e54900b001a1241a9bd0mr6595861plf.5.1680177481461; Thu, 30 Mar 2023 04:58:01 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1680177481; cv=none; d=google.com; s=arc-20160816; b=kYcbxS92ulTYXTJTaCsJ1O4HmF3UyPtjx+M0KrLU+ktAoneT3u/qx+dByPh7kIir6z 7hBON64qBbvJKWqYRmXFBZfSZ2gQHrAujKwaqUBXpOaWcCmNEsVWeN25MVhUI2MwpZoc N8hS6j2a8sGDXQErIuZf1ckdOgsqC9acGvbt/C2OfBGG8R1jozrtvvv4v7vXyQ67nRWP k7YLkiuOrOYC+xgaGbmBPhhgBrtSFoSiErVBBjXfVMbLJUsNBMH8N6Z5bInS2yAsS8LW Q9mpIh1KrNPb9HCsOqAW3GsbrHHXXDMXoWEZGfhO2J1kMS8gmnFNrQZZW1x4/oFA1L9L SfBg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=ruFNRYlkeA7U/LySXAwK77bGHsdZfakSttMzy8rXOc4=; b=LTbvB6/CGOdidj35id70LZ9NowexCny1/62uIV8rzncCDW69Kxbv1pielSz3gLydcX mE1v6aOZR1k/YNlPekuGAlAE79qupcmM/BVtOKliFD6dGvnlo0wM8zYgUWwKvGb+29/I Et/LXY3mOVDu+TP3vOlrFRXqEMDF2hArU+mCpvYXiBdY7JxnAzXL9qWsnGTCmZlW5s1E S2YRYqQCq1tzhJFPMIIGo9bTxzemWEGVEeqcGvCooE6i45WNJeXAl3mCW/lfqzK9KeuL 6VizeigB7Ydbl6qL13mCiOwmMxm07wG15a31y7Cqz9V3cOkLp6dCP/zgWGZvur6Lb53n ZNDg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b="Uv/jmmuo"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id d18-20020a170902ced200b001968c61066esi31890043plg.493.2023.03.30.04.57.48; Thu, 30 Mar 2023 04:58:01 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b="Uv/jmmuo"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231681AbjC3Lvr (ORCPT + 99 others); Thu, 30 Mar 2023 07:51:47 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43726 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231389AbjC3LvZ (ORCPT ); Thu, 30 Mar 2023 07:51:25 -0400 Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 18281B752; Thu, 30 Mar 2023 04:50:52 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1680177052; x=1711713052; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=hKxUht4bI1oBBOH1u/ajoAci1RdtaLogFa2+swkKZCg=; b=Uv/jmmuoLoYKJ8UjANEx2N6XOOeAhaSg5B+qklgkL2E/7Qwego/4EHXG Una+hqXS/Ti1zO7L9o2vS+t/BM04oaj+RRFM/jd/f6FEnZ/LUcHd72eVz x1sNfCaUMiajo8ilKx1C/zoLfXvXYQVJ96pyDqLiv8A2bh/rgHSPywmIG +pCPR4EvciEbP9AAUNl7y/o/TMkYgl4jgElflClq/sctJQukEeqMp3Epc Ua4TnaaMXrVWolFHoIQiGTuU9mXZltqfFfJ+sqfN60jjz1B6Sk6TbqxHA WjHix9i0vxBWKqu/vwYo27XkDSznCcqYNbLTJiaJeR1e1aMyK3mR6Z2Xz g==; X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="342756785" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="342756785" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Mar 2023 04:50:32 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="634856509" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="634856509" Received: from ngreburx-mobl.ger.corp.intel.com (HELO box.shutemov.name) ([10.251.209.91]) by orsmga003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Mar 2023 04:50:17 -0700 Received: by box.shutemov.name (Postfix, from userid 1000) id 1B89A104390; Thu, 30 Mar 2023 14:50:00 +0300 (+03) From: "Kirill A. Shutemov" To: Borislav Petkov , Andy Lutomirski , Sean Christopherson , Andrew Morton , Joerg Roedel , Ard Biesheuvel Cc: Andi Kleen , Kuppuswamy Sathyanarayanan , David Rientjes , Vlastimil Babka , Tom Lendacky , Thomas Gleixner , Peter Zijlstra , Paolo Bonzini , Ingo Molnar , Dario Faggioli , Dave Hansen , Mike Rapoport , David Hildenbrand , Mel Gorman , marcelo.cerri@canonical.com, tim.gardner@canonical.com, khalid.elmously@canonical.com, philip.cox@canonical.com, aarcange@redhat.com, peterx@redhat.com, x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev, linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A. 
Shutemov" , Borislav Petkov Subject: [PATCHv9 05/14] efi/x86: Get full memory map in allocate_e820() Date: Thu, 30 Mar 2023 14:49:47 +0300 Message-Id: <20230330114956.20342-6-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230330114956.20342-1-kirill.shutemov@linux.intel.com> References: <20230330114956.20342-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 X-Spam-Status: No, score=-2.4 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,RCVD_IN_MSPIKE_H3, RCVD_IN_MSPIKE_WL,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761793782662126685?= X-GMAIL-MSGID: =?utf-8?q?1761793782662126685?= Currently allocate_e820() is only interested in the size of map and size of memory descriptor to determine how many e820 entries the kernel needs. UEFI Specification version 2.9 introduces a new memory type -- unaccepted memory. To track unaccepted memory kernel needs to allocate a bitmap. The size of the bitmap is dependent on the maximum physical address present in the system. A full memory map is required to find the maximum address. Modify allocate_e820() to get a full memory map. Signed-off-by: Kirill A. Shutemov Reviewed-by: Borislav Petkov --- drivers/firmware/efi/libstub/x86-stub.c | 26 +++++++++++-------------- 1 file changed, 11 insertions(+), 15 deletions(-) diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c index a0bfd31358ba..fff81843169c 100644 --- a/drivers/firmware/efi/libstub/x86-stub.c +++ b/drivers/firmware/efi/libstub/x86-stub.c @@ -681,28 +681,24 @@ static efi_status_t allocate_e820(struct boot_params *params, struct setup_data **e820ext, u32 *e820ext_size) { - unsigned long map_size, desc_size, map_key; + struct efi_boot_memmap *map; efi_status_t status; - __u32 nr_desc, desc_version; + __u32 nr_desc; - /* Only need the size of the mem map and size of each mem descriptor */ - map_size = 0; - status = efi_bs_call(get_memory_map, &map_size, NULL, &map_key, - &desc_size, &desc_version); - if (status != EFI_BUFFER_TOO_SMALL) - return (status != EFI_SUCCESS) ? status : EFI_UNSUPPORTED; - - nr_desc = map_size / desc_size + EFI_MMAP_NR_SLACK_SLOTS; + status = efi_get_memory_map(&map, false); + if (status != EFI_SUCCESS) + return status; - if (nr_desc > ARRAY_SIZE(params->e820_table)) { - u32 nr_e820ext = nr_desc - ARRAY_SIZE(params->e820_table); + nr_desc = map->map_size / map->desc_size; + if (nr_desc > ARRAY_SIZE(params->e820_table) - EFI_MMAP_NR_SLACK_SLOTS) { + u32 nr_e820ext = nr_desc - ARRAY_SIZE(params->e820_table) + + EFI_MMAP_NR_SLACK_SLOTS; status = alloc_e820ext(nr_e820ext, e820ext, e820ext_size); - if (status != EFI_SUCCESS) - return status; } - return EFI_SUCCESS; + efi_bs_call(free_pool, map); + return status; } struct exit_boot_struct { From patchwork Thu Mar 30 11:49:48 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A. 
Shutemov" X-Patchwork-Id: 77152 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp1067178vqo; Thu, 30 Mar 2023 04:57:20 -0700 (PDT) X-Google-Smtp-Source: AKy350botD7agVMBd+nk4xnGv737XGK/AqSkqc86giGxXpOUtudCzwmd6OS+VEJnQaKStTkaCwqo X-Received: by 2002:a17:90b:17c9:b0:237:9cc7:28a6 with SMTP id me9-20020a17090b17c900b002379cc728a6mr25198490pjb.26.1680177440060; Thu, 30 Mar 2023 04:57:20 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1680177440; cv=none; d=google.com; s=arc-20160816; b=dcDL6leFGhLrWgex1RTgo1Ii6Rl1uOjS68sKld/NcxokPvWTrLlZ/yhEg/j9jHnX3n B8pa0vS66ZWf+IFK+Z1yQy+asRi6wtSbk3Bf2wWy7XIOOPIlDisapU4Bh3pTDXXjVhWv LhqXqegXXp6yqEz3/zh7wkwus+hDehFL80pBrqOdTeqiYzxW86vwuvqjny549i7Nm48U /kNbpEDuIY/RIs61By+pLI8MPd0nLc1o/Xla0AS2zI9jIXFOG+auyC8BB8P6aM75lHuh DwXAJEBILSyGKxr9AbQSyK6rr/xuBQohs9qI3fVYV6yWpPryI2jlutt7p7NFfnilYiPQ MCNQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=NsUoVw2/Wz8ct8GynZYL3aQMK/00A4pcE3JltZ1Es1c=; b=PVDZDxQ7iwFkifeiSYTD2QBjyYczCvHoSWzDWJYedJViw7iXCb6rCpxTHu89xM4O3V pZcguFEi4wFNJ4jVX1zC1knD4sBmYvp26IGe8l+kyJ4vyHhzXlxISnGpoaEn/pnZkdFN TZ+BPes5iWWExLIk3QOQMhup+BrHDWoAmdW3S6JIiqL52pfqo2HbvdSQ06SeuW3x5IA2 YKKprf1zvU2uQXIj+nOZr103IVXAfdT4SoQ5kzax+24TA+cNf5h8rSXFV99YHKvSmLRx PGiEfETkiT2gJgYLaoLF6RsYy566mKuWt76dCVUhQI9igmQWrctBhdcwi4ChKe6Pgayp p4qQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=SSOFEuLW; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id c10-20020a170902d48a00b001a216fddd01si15736518plg.647.2023.03.30.04.57.06; Thu, 30 Mar 2023 04:57:20 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=SSOFEuLW; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231853AbjC3LvJ (ORCPT + 99 others); Thu, 30 Mar 2023 07:51:09 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43794 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231847AbjC3Luu (ORCPT ); Thu, 30 Mar 2023 07:50:50 -0400 Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C39BCA24B; Thu, 30 Mar 2023 04:50:33 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1680177034; x=1711713034; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=SDxzXgjL7pEXr+tV7TRZsiH2QE2GbbDPicGBqQxLN7g=; b=SSOFEuLWoEuUnaNXq3+ageuv/MA3Nqe+vq0wk6gFlzjokhfZVRxhXc5i 1pVA4pmg5uxThDFYSr4FCEIkLVIzQu68zdyp6Xox6GF39ar+t1i3tPEjW h8mplSkv3yj31ID9SYzBarEdJt2HGDFzddWdx43+7Nyc/NigkrZKGiX6o 0WrTb1MS/PREx+FWZFN8WhJC5ulBP+GZ6Wyp+5U3avrntlya2Ag/jB2v6 Ff/5ZpJ5e1fIDupPq6uRF/wHEcHqZK3rx7ITKAqesg/Cop9K4meT1l7RX 52wblpMTV7hQrWksKfaG1Pj81n+eRGlcCklxwTY7ljdAFl3jtdi5S954u g==; X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="342756758" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="342756758" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Mar 2023 04:50:32 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="634856505" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="634856505" Received: from ngreburx-mobl.ger.corp.intel.com (HELO box.shutemov.name) ([10.251.209.91]) by orsmga003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Mar 2023 04:50:16 -0700 Received: by box.shutemov.name (Postfix, from userid 1000) id 276A3104391; Thu, 30 Mar 2023 14:50:00 +0300 (+03) From: "Kirill A. Shutemov" To: Borislav Petkov , Andy Lutomirski , Sean Christopherson , Andrew Morton , Joerg Roedel , Ard Biesheuvel Cc: Andi Kleen , Kuppuswamy Sathyanarayanan , David Rientjes , Vlastimil Babka , Tom Lendacky , Thomas Gleixner , Peter Zijlstra , Paolo Bonzini , Ingo Molnar , Dario Faggioli , Dave Hansen , Mike Rapoport , David Hildenbrand , Mel Gorman , marcelo.cerri@canonical.com, tim.gardner@canonical.com, khalid.elmously@canonical.com, philip.cox@canonical.com, aarcange@redhat.com, peterx@redhat.com, x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev, linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A. 
Shutemov" Subject: [PATCHv9 06/14] x86/boot: Add infrastructure required for unaccepted memory support Date: Thu, 30 Mar 2023 14:49:48 +0300 Message-Id: <20230330114956.20342-7-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230330114956.20342-1-kirill.shutemov@linux.intel.com> References: <20230330114956.20342-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 X-Spam-Status: No, score=-2.4 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,RCVD_IN_MSPIKE_H3, RCVD_IN_MSPIKE_WL,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761793739179322734?= X-GMAIL-MSGID: =?utf-8?q?1761793739179322734?= Pull functionality from the main kernel headers and lib/ that is required for unaccepted memory support. This is preparatory patch. The users for the functionality will come in following patches. Signed-off-by: Kirill A. Shutemov Reviewed-by: Borislav Petkov (AMD) --- arch/x86/boot/bitops.h | 40 ++++++++++++ arch/x86/boot/compressed/align.h | 14 +++++ arch/x86/boot/compressed/bitmap.c | 43 +++++++++++++ arch/x86/boot/compressed/bitmap.h | 49 +++++++++++++++ arch/x86/boot/compressed/bits.h | 36 +++++++++++ arch/x86/boot/compressed/find.c | 54 ++++++++++++++++ arch/x86/boot/compressed/find.h | 79 ++++++++++++++++++++++++ arch/x86/boot/compressed/math.h | 37 +++++++++++ arch/x86/boot/compressed/minmax.h | 61 ++++++++++++++++++ arch/x86/boot/compressed/pgtable_types.h | 25 ++++++++ 10 files changed, 438 insertions(+) create mode 100644 arch/x86/boot/compressed/align.h create mode 100644 arch/x86/boot/compressed/bitmap.c create mode 100644 arch/x86/boot/compressed/bitmap.h create mode 100644 arch/x86/boot/compressed/bits.h create mode 100644 arch/x86/boot/compressed/find.c create mode 100644 arch/x86/boot/compressed/find.h create mode 100644 arch/x86/boot/compressed/math.h create mode 100644 arch/x86/boot/compressed/minmax.h create mode 100644 arch/x86/boot/compressed/pgtable_types.h diff --git a/arch/x86/boot/bitops.h b/arch/x86/boot/bitops.h index 8518ae214c9b..38badf028543 100644 --- a/arch/x86/boot/bitops.h +++ b/arch/x86/boot/bitops.h @@ -41,4 +41,44 @@ static inline void set_bit(int nr, void *addr) asm("btsl %1,%0" : "+m" (*(u32 *)addr) : "Ir" (nr)); } +static __always_inline void __set_bit(long nr, volatile unsigned long *addr) +{ + asm volatile(__ASM_SIZE(bts) " %1,%0" : : "m" (*(volatile long *) addr), + "Ir" (nr) : "memory"); +} + +static __always_inline void __clear_bit(long nr, volatile unsigned long *addr) +{ + asm volatile(__ASM_SIZE(btr) " %1,%0" : : "m" (*(volatile long *) addr), + "Ir" (nr) : "memory"); +} + +/** + * __ffs - find first set bit in word + * @word: The word to search + * + * Undefined if no bit exists, so code should check against 0 first. + */ +static __always_inline unsigned long __ffs(unsigned long word) +{ + asm("rep; bsf %1,%0" + : "=r" (word) + : "rm" (word)); + return word; +} + +/** + * ffz - find first zero bit in word + * @word: The word to search + * + * Undefined if no zero exists, so code should check against ~0UL first. 
+ */ +static __always_inline unsigned long ffz(unsigned long word) +{ + asm("rep; bsf %1,%0" + : "=r" (word) + : "r" (~word)); + return word; +} + #endif /* BOOT_BITOPS_H */ diff --git a/arch/x86/boot/compressed/align.h b/arch/x86/boot/compressed/align.h new file mode 100644 index 000000000000..7ccabbc5d1b8 --- /dev/null +++ b/arch/x86/boot/compressed/align.h @@ -0,0 +1,14 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef BOOT_ALIGN_H +#define BOOT_ALIGN_H +#define _LINUX_ALIGN_H /* Inhibit inclusion of */ + +/* @a is a power of 2 value */ +#define ALIGN(x, a) __ALIGN_KERNEL((x), (a)) +#define ALIGN_DOWN(x, a) __ALIGN_KERNEL((x) - ((a) - 1), (a)) +#define __ALIGN_MASK(x, mask) __ALIGN_KERNEL_MASK((x), (mask)) +#define PTR_ALIGN(p, a) ((typeof(p))ALIGN((unsigned long)(p), (a))) +#define PTR_ALIGN_DOWN(p, a) ((typeof(p))ALIGN_DOWN((unsigned long)(p), (a))) +#define IS_ALIGNED(x, a) (((x) & ((typeof(x))(a) - 1)) == 0) + +#endif diff --git a/arch/x86/boot/compressed/bitmap.c b/arch/x86/boot/compressed/bitmap.c new file mode 100644 index 000000000000..789ecadeb521 --- /dev/null +++ b/arch/x86/boot/compressed/bitmap.c @@ -0,0 +1,43 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include "bitmap.h" + +void __bitmap_set(unsigned long *map, unsigned int start, int len) +{ + unsigned long *p = map + BIT_WORD(start); + const unsigned int size = start + len; + int bits_to_set = BITS_PER_LONG - (start % BITS_PER_LONG); + unsigned long mask_to_set = BITMAP_FIRST_WORD_MASK(start); + + while (len - bits_to_set >= 0) { + *p |= mask_to_set; + len -= bits_to_set; + bits_to_set = BITS_PER_LONG; + mask_to_set = ~0UL; + p++; + } + if (len) { + mask_to_set &= BITMAP_LAST_WORD_MASK(size); + *p |= mask_to_set; + } +} + +void __bitmap_clear(unsigned long *map, unsigned int start, int len) +{ + unsigned long *p = map + BIT_WORD(start); + const unsigned int size = start + len; + int bits_to_clear = BITS_PER_LONG - (start % BITS_PER_LONG); + unsigned long mask_to_clear = BITMAP_FIRST_WORD_MASK(start); + + while (len - bits_to_clear >= 0) { + *p &= ~mask_to_clear; + len -= bits_to_clear; + bits_to_clear = BITS_PER_LONG; + mask_to_clear = ~0UL; + p++; + } + if (len) { + mask_to_clear &= BITMAP_LAST_WORD_MASK(size); + *p &= ~mask_to_clear; + } +} diff --git a/arch/x86/boot/compressed/bitmap.h b/arch/x86/boot/compressed/bitmap.h new file mode 100644 index 000000000000..35357f5feda2 --- /dev/null +++ b/arch/x86/boot/compressed/bitmap.h @@ -0,0 +1,49 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef BOOT_BITMAP_H +#define BOOT_BITMAP_H +#define __LINUX_BITMAP_H /* Inhibit inclusion of */ + +#include "../bitops.h" +#include "../string.h" +#include "align.h" + +#define BITMAP_MEM_ALIGNMENT 8 +#define BITMAP_MEM_MASK (BITMAP_MEM_ALIGNMENT - 1) + +#define BITMAP_FIRST_WORD_MASK(start) (~0UL << ((start) & (BITS_PER_LONG - 1))) +#define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (BITS_PER_LONG - 1))) + +#define BIT_WORD(nr) ((nr) / BITS_PER_LONG) + +void __bitmap_set(unsigned long *map, unsigned int start, int len); +void __bitmap_clear(unsigned long *map, unsigned int start, int len); + +static __always_inline void bitmap_set(unsigned long *map, unsigned int start, + unsigned int nbits) +{ + if (__builtin_constant_p(nbits) && nbits == 1) + __set_bit(start, map); + else if (__builtin_constant_p(start & BITMAP_MEM_MASK) && + IS_ALIGNED(start, BITMAP_MEM_ALIGNMENT) && + __builtin_constant_p(nbits & BITMAP_MEM_MASK) && + IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT)) + memset((char *)map + start / 8, 0xff, nbits / 
8); + else + __bitmap_set(map, start, nbits); +} + +static __always_inline void bitmap_clear(unsigned long *map, unsigned int start, + unsigned int nbits) +{ + if (__builtin_constant_p(nbits) && nbits == 1) + __clear_bit(start, map); + else if (__builtin_constant_p(start & BITMAP_MEM_MASK) && + IS_ALIGNED(start, BITMAP_MEM_ALIGNMENT) && + __builtin_constant_p(nbits & BITMAP_MEM_MASK) && + IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT)) + memset((char *)map + start / 8, 0, nbits / 8); + else + __bitmap_clear(map, start, nbits); +} + +#endif diff --git a/arch/x86/boot/compressed/bits.h b/arch/x86/boot/compressed/bits.h new file mode 100644 index 000000000000..b0ffa007ee19 --- /dev/null +++ b/arch/x86/boot/compressed/bits.h @@ -0,0 +1,36 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef BOOT_BITS_H +#define BOOT_BITS_H +#define __LINUX_BITS_H /* Inhibit inclusion of */ + +#ifdef __ASSEMBLY__ +#define _AC(X,Y) X +#define _AT(T,X) X +#else +#define __AC(X,Y) (X##Y) +#define _AC(X,Y) __AC(X,Y) +#define _AT(T,X) ((T)(X)) +#endif + +#define _UL(x) (_AC(x, UL)) +#define _ULL(x) (_AC(x, ULL)) +#define UL(x) (_UL(x)) +#define ULL(x) (_ULL(x)) + +#define BIT(nr) (UL(1) << (nr)) +#define BIT_ULL(nr) (ULL(1) << (nr)) +#define BIT_MASK(nr) (UL(1) << ((nr) % BITS_PER_LONG)) +#define BIT_WORD(nr) ((nr) / BITS_PER_LONG) +#define BIT_ULL_MASK(nr) (ULL(1) << ((nr) % BITS_PER_LONG_LONG)) +#define BIT_ULL_WORD(nr) ((nr) / BITS_PER_LONG_LONG) +#define BITS_PER_BYTE 8 + +#define GENMASK(h, l) \ + (((~UL(0)) - (UL(1) << (l)) + 1) & \ + (~UL(0) >> (BITS_PER_LONG - 1 - (h)))) + +#define GENMASK_ULL(h, l) \ + (((~ULL(0)) - (ULL(1) << (l)) + 1) & \ + (~ULL(0) >> (BITS_PER_LONG_LONG - 1 - (h)))) + +#endif diff --git a/arch/x86/boot/compressed/find.c b/arch/x86/boot/compressed/find.c new file mode 100644 index 000000000000..b97a9e7c8085 --- /dev/null +++ b/arch/x86/boot/compressed/find.c @@ -0,0 +1,54 @@ +// SPDX-License-Identifier: GPL-2.0-only +#include "bitmap.h" +#include "find.h" +#include "math.h" +#include "minmax.h" + +static __always_inline unsigned long swab(const unsigned long y) +{ +#if __BITS_PER_LONG == 64 + return __builtin_bswap64(y); +#else /* __BITS_PER_LONG == 32 */ + return __builtin_bswap32(y); +#endif +} + +unsigned long _find_next_bit(const unsigned long *addr1, + const unsigned long *addr2, unsigned long nbits, + unsigned long start, unsigned long invert, unsigned long le) +{ + unsigned long tmp, mask; + + if (start >= nbits) + return nbits; + + tmp = addr1[start / BITS_PER_LONG]; + if (addr2) + tmp &= addr2[start / BITS_PER_LONG]; + tmp ^= invert; + + /* Handle 1st word. 
*/ + mask = BITMAP_FIRST_WORD_MASK(start); + if (le) + mask = swab(mask); + + tmp &= mask; + + start = round_down(start, BITS_PER_LONG); + + while (!tmp) { + start += BITS_PER_LONG; + if (start >= nbits) + return nbits; + + tmp = addr1[start / BITS_PER_LONG]; + if (addr2) + tmp &= addr2[start / BITS_PER_LONG]; + tmp ^= invert; + } + + if (le) + tmp = swab(tmp); + + return min(start + __ffs(tmp), nbits); +} diff --git a/arch/x86/boot/compressed/find.h b/arch/x86/boot/compressed/find.h new file mode 100644 index 000000000000..903574b9d57a --- /dev/null +++ b/arch/x86/boot/compressed/find.h @@ -0,0 +1,79 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef BOOT_FIND_H +#define BOOT_FIND_H +#define __LINUX_FIND_H /* Inhibit inclusion of */ + +#include "../bitops.h" +#include "align.h" +#include "bits.h" + +unsigned long _find_next_bit(const unsigned long *addr1, + const unsigned long *addr2, unsigned long nbits, + unsigned long start, unsigned long invert, unsigned long le); + +/** + * find_next_bit - find the next set bit in a memory region + * @addr: The address to base the search on + * @offset: The bitnumber to start searching at + * @size: The bitmap size in bits + * + * Returns the bit number for the next set bit + * If no bits are set, returns @size. + */ +static inline +unsigned long find_next_bit(const unsigned long *addr, unsigned long size, + unsigned long offset) +{ + if (small_const_nbits(size)) { + unsigned long val; + + if (offset >= size) + return size; + + val = *addr & GENMASK(size - 1, offset); + return val ? __ffs(val) : size; + } + + return _find_next_bit(addr, NULL, size, offset, 0UL, 0); +} + +/** + * find_next_zero_bit - find the next cleared bit in a memory region + * @addr: The address to base the search on + * @offset: The bitnumber to start searching at + * @size: The bitmap size in bits + * + * Returns the bit number of the next zero bit + * If no bits are zero, returns @size. + */ +static inline +unsigned long find_next_zero_bit(const unsigned long *addr, unsigned long size, + unsigned long offset) +{ + if (small_const_nbits(size)) { + unsigned long val; + + if (offset >= size) + return size; + + val = *addr | ~GENMASK(size - 1, offset); + return val == ~0UL ? size : ffz(val); + } + + return _find_next_bit(addr, NULL, size, offset, ~0UL, 0); +} + +/** + * for_each_set_bitrange_from - iterate over all set bit ranges [b; e) + * @b: bit offset of start of current bitrange (first set bit); must be initialized + * @e: bit offset of end of current bitrange (first unset bit) + * @addr: bitmap address to base the search on + * @size: bitmap size in number of bits + */ +#define for_each_set_bitrange_from(b, e, addr, size) \ + for ((b) = find_next_bit((addr), (size), (b)), \ + (e) = find_next_zero_bit((addr), (size), (b) + 1); \ + (b) < (size); \ + (b) = find_next_bit((addr), (size), (e) + 1), \ + (e) = find_next_zero_bit((addr), (size), (b) + 1)) +#endif diff --git a/arch/x86/boot/compressed/math.h b/arch/x86/boot/compressed/math.h new file mode 100644 index 000000000000..f7eede84bbc2 --- /dev/null +++ b/arch/x86/boot/compressed/math.h @@ -0,0 +1,37 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef BOOT_MATH_H +#define BOOT_MATH_H +#define __LINUX_MATH_H /* Inhibit inclusion of */ + +/* + * + * This looks more complex than it should be. But we need to + * get the type for the ~ right in round_down (it needs to be + * as wide as the result!), and we want to evaluate the macro + * arguments just once each. 
+ */ +#define __round_mask(x, y) ((__typeof__(x))((y)-1)) + +/** + * round_up - round up to next specified power of 2 + * @x: the value to round + * @y: multiple to round up to (must be a power of 2) + * + * Rounds @x up to next multiple of @y (which must be a power of 2). + * To perform arbitrary rounding up, use roundup() below. + */ +#define round_up(x, y) ((((x)-1) | __round_mask(x, y))+1) + +/** + * round_down - round down to next specified power of 2 + * @x: the value to round + * @y: multiple to round down to (must be a power of 2) + * + * Rounds @x down to next multiple of @y (which must be a power of 2). + * To perform arbitrary rounding down, use rounddown() below. + */ +#define round_down(x, y) ((x) & ~__round_mask(x, y)) + +#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d)) + +#endif diff --git a/arch/x86/boot/compressed/minmax.h b/arch/x86/boot/compressed/minmax.h new file mode 100644 index 000000000000..4efd05673260 --- /dev/null +++ b/arch/x86/boot/compressed/minmax.h @@ -0,0 +1,61 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef BOOT_MINMAX_H +#define BOOT_MINMAX_H +#define __LINUX_MINMAX_H /* Inhibit inclusion of */ + +/* + * This returns a constant expression while determining if an argument is + * a constant expression, most importantly without evaluating the argument. + * Glory to Martin Uecker + */ +#define __is_constexpr(x) \ + (sizeof(int) == sizeof(*(8 ? ((void *)((long)(x) * 0l)) : (int *)8))) + +/* + * min()/max()/clamp() macros must accomplish three things: + * + * - avoid multiple evaluations of the arguments (so side-effects like + * "x++" happen only once) when non-constant. + * - perform strict type-checking (to generate warnings instead of + * nasty runtime surprises). See the "unnecessary" pointer comparison + * in __typecheck(). + * - retain result as a constant expressions when called with only + * constant expressions (to avoid tripping VLA warnings in stack + * allocation usage). + */ +#define __typecheck(x, y) \ + (!!(sizeof((typeof(x) *)1 == (typeof(y) *)1))) + +#define __no_side_effects(x, y) \ + (__is_constexpr(x) && __is_constexpr(y)) + +#define __safe_cmp(x, y) \ + (__typecheck(x, y) && __no_side_effects(x, y)) + +#define __cmp(x, y, op) ((x) op (y) ? 
(x) : (y)) + +#define __cmp_once(x, y, unique_x, unique_y, op) ({ \ + typeof(x) unique_x = (x); \ + typeof(y) unique_y = (y); \ + __cmp(unique_x, unique_y, op); }) + +#define __careful_cmp(x, y, op) \ + __builtin_choose_expr(__safe_cmp(x, y), \ + __cmp(x, y, op), \ + __cmp_once(x, y, __UNIQUE_ID(__x), __UNIQUE_ID(__y), op)) + +/** + * min - return minimum of two values of the same or compatible types + * @x: first value + * @y: second value + */ +#define min(x, y) __careful_cmp(x, y, <) + +/** + * max - return maximum of two values of the same or compatible types + * @x: first value + * @y: second value + */ +#define max(x, y) __careful_cmp(x, y, >) + +#endif diff --git a/arch/x86/boot/compressed/pgtable_types.h b/arch/x86/boot/compressed/pgtable_types.h new file mode 100644 index 000000000000..8f1d87a69efc --- /dev/null +++ b/arch/x86/boot/compressed/pgtable_types.h @@ -0,0 +1,25 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef BOOT_COMPRESSED_PGTABLE_TYPES_H +#define BOOT_COMPRESSED_PGTABLE_TYPES_H +#define _ASM_X86_PGTABLE_DEFS_H /* Inhibit inclusion of */ + +#define PAGE_SHIFT 12 + +#ifdef CONFIG_X86_64 +#define PTE_SHIFT 9 +#elif defined CONFIG_X86_PAE +#define PTE_SHIFT 9 +#else /* 2-level */ +#define PTE_SHIFT 10 +#endif + +enum pg_level { + PG_LEVEL_NONE, + PG_LEVEL_4K, + PG_LEVEL_2M, + PG_LEVEL_1G, + PG_LEVEL_512G, + PG_LEVEL_NUM +}; + +#endif From patchwork Thu Mar 30 11:49:49 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A. Shutemov" X-Patchwork-Id: 77156 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp1067714vqo; Thu, 30 Mar 2023 04:58:33 -0700 (PDT) X-Google-Smtp-Source: AKy350YstJvIIFlHbqDPi2CD7N8Pc2V3eUm/k0GxSVEmWp5ekI3eXNYmjTURqS05tEcJ5KTdTXrr X-Received: by 2002:a17:902:f9cc:b0:1a1:8860:70d7 with SMTP id kz12-20020a170902f9cc00b001a1886070d7mr19145565plb.48.1680177513322; Thu, 30 Mar 2023 04:58:33 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1680177513; cv=none; d=google.com; s=arc-20160816; b=h8YlOMhR7T+IKj8nY/h8KXnP1odMOSYEoNquX6+vk7NzbPVr7bFAqsJDZdxrei5CVf TQe64IzLSKjd9oNMQIIZF6CAhGbOuglifdww+KJxftVznBbCVpl6rDmLv94KyO3PzZo5 m0EESBP+r/ckLABuBr/fSv9UjRBdzUgCb061oe9HVEEZtMXIO3/Ge2QaEq+f42XDgig3 p4liqqPr78zks2s2wkxk5A6JbzfUuXn1s0jaq9ctY7iT0zC45EXt0qXTENcrPyNpRkxw k4G3YbH56fMdUeePW+eKcS658AQ3eFnEs6fSVS9wlwX0c5q8CkEulzmNdimAfa83LhqE XL9w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=CHDV823rZOUn0+ZJELanQGogydDmakICM2zRSlXwzXs=; b=DqXqRu5F1bbWQ7WfX6GEp4PqEePiESvykFTV+iQ4qFkfHa9QHiVHZBF5veNlZfXBhE OyJUmMqy3hxYn7pC14taKU87W2nImFyE3MGnL/l4SB+/ZdBhDdBwcYvtgCcpt10d2Dwl 7doYvHSark4V+Q4Gu2JapcZEABqhAMJTxn4zO/KvA20gWLksXPk9aDfVyFcSIInrX+4d UaLoRtdzFdo+db7757m2UfBFhlkUZ6X5QDUQ0U0Pi7ixkhH9rc1pYyFLZWkZhMNxBHSr 51vhTXwTgFKhzon8nZdXcIsFg099YjtcOXaSx9XEv2gJdclxS6foT4DlfWhN5rFsWAKf TzXA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=jfMtOPog; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id w4-20020a656944000000b0051254ec023esi18692398pgq.44.2023.03.30.04.58.18; Thu, 30 Mar 2023 04:58:33 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=jfMtOPog; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231357AbjC3Lu2 (ORCPT + 99 others); Thu, 30 Mar 2023 07:50:28 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43004 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231583AbjC3LuX (ORCPT ); Thu, 30 Mar 2023 07:50:23 -0400 Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AB7C310C3; Thu, 30 Mar 2023 04:50:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1680177020; x=1711713020; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=CrAH2QrN2ySQ5GvkIHo7a3X9ICKho3MAnrqOOPhg1H8=; b=jfMtOPogc7ozZZhPMrmTnwhywjd3q5fuvZqVf3rGPq0XCaFFLtsNZlmD NHVgO2xozHphc8tvyjq4IilTZKBQ0vDcqpKO2odi5e6Rm/4xUABp52mdm BrDc9JoSUOg55x212pVU0KuPQK0G6xCiUTYTSanJEKokjArQHmJpjaNll +m4BhVghj47jg6xyAefK00fsD7/bFBR9xjEGv2QEHsWqVf0ZmwEZvui3k GihjNfWyZCfCLVYme3IoOonDMd+DtQaKeNMC6aZwK2DEFrXmLO8E9592A OdohLjnvcz4SXTye2N8iDIs5i2y7qgp1WqYbtUIBfkdF7yH21cJ6/I396 g==; X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="339868458" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="339868458" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Mar 2023 04:50:20 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="1014401439" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="1014401439" Received: from ngreburx-mobl.ger.corp.intel.com (HELO box.shutemov.name) ([10.251.209.91]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Mar 2023 04:50:12 -0700 Received: by box.shutemov.name (Postfix, from userid 1000) id 3301A104454; Thu, 30 Mar 2023 14:50:00 +0300 (+03) From: "Kirill A. Shutemov" To: Borislav Petkov , Andy Lutomirski , Sean Christopherson , Andrew Morton , Joerg Roedel , Ard Biesheuvel Cc: Andi Kleen , Kuppuswamy Sathyanarayanan , David Rientjes , Vlastimil Babka , Tom Lendacky , Thomas Gleixner , Peter Zijlstra , Paolo Bonzini , Ingo Molnar , Dario Faggioli , Dave Hansen , Mike Rapoport , David Hildenbrand , Mel Gorman , marcelo.cerri@canonical.com, tim.gardner@canonical.com, khalid.elmously@canonical.com, philip.cox@canonical.com, aarcange@redhat.com, peterx@redhat.com, x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev, linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A. 
Shutemov" Subject: [PATCHv9 07/14] efi/x86: Implement support for unaccepted memory Date: Thu, 30 Mar 2023 14:49:49 +0300 Message-Id: <20230330114956.20342-8-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230330114956.20342-1-kirill.shutemov@linux.intel.com> References: <20230330114956.20342-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 X-Spam-Status: No, score=-2.4 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761793816225495301?= X-GMAIL-MSGID: =?utf-8?q?1761793816225495301?= UEFI Specification version 2.9 introduces the concept of memory acceptance: Some Virtual Machine platforms, such as Intel TDX or AMD SEV-SNP, require memory to be accepted before it can be used by the guest. Accepting happens via a protocol specific to the Virtual Machine platform. Accepting memory is costly and it makes the VMM allocate memory for the accepted guest physical address range. It's better to postpone memory acceptance until memory is needed. It lowers boot time and reduces memory overhead. The kernel needs to know what memory has been accepted. Firmware communicates this information via the memory map: a new memory type -- EFI_UNACCEPTED_MEMORY -- indicates such memory. Range-based tracking works fine for firmware, but it gets bulky for the kernel: e820 has to be modified on every page acceptance. It leads to table fragmentation and there's a limited number of entries in the e820 table. Another option is to mark such memory as usable in e820 and track if the range has been accepted in a bitmap. One bit in the bitmap represents 2MiB in the address space: one 4k page is enough to track 64GiB of physical address space. In the worst-case scenario -- a huge hole in the middle of the address space -- it needs 256MiB to handle 4PiB of the address space. Any unaccepted memory that is not aligned to 2M gets accepted upfront. The bitmap is allocated and constructed in the EFI stub and passed down to the kernel via boot_params. allocate_e820() allocates the bitmap if unaccepted memory is present, according to the maximum address in the memory map. Signed-off-by: Kirill A. 
Shutemov --- Documentation/x86/zero-page.rst | 1 + arch/x86/boot/compressed/Makefile | 1 + arch/x86/boot/compressed/mem.c | 73 ++++++++++++++++++++++++ arch/x86/include/asm/unaccepted_memory.h | 10 ++++ arch/x86/include/uapi/asm/bootparam.h | 2 +- drivers/firmware/efi/Kconfig | 14 +++++ drivers/firmware/efi/efi.c | 1 + drivers/firmware/efi/libstub/x86-stub.c | 65 +++++++++++++++++++++ include/linux/efi.h | 3 +- 9 files changed, 168 insertions(+), 2 deletions(-) create mode 100644 arch/x86/boot/compressed/mem.c create mode 100644 arch/x86/include/asm/unaccepted_memory.h diff --git a/Documentation/x86/zero-page.rst b/Documentation/x86/zero-page.rst index 45aa9cceb4f1..f21905e61ade 100644 --- a/Documentation/x86/zero-page.rst +++ b/Documentation/x86/zero-page.rst @@ -20,6 +20,7 @@ Offset/Size Proto Name Meaning 060/010 ALL ist_info Intel SpeedStep (IST) BIOS support information (struct ist_info) 070/008 ALL acpi_rsdp_addr Physical address of ACPI RSDP table +078/008 ALL unaccepted_memory Bitmap of unaccepted memory (1bit == 2M) 080/010 ALL hd0_info hd0 disk parameter, OBSOLETE!! 090/010 ALL hd1_info hd1 disk parameter, OBSOLETE!! 0A0/010 ALL sys_desc_table System description table (struct sys_desc_table), diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile index 6b6cfe607bdb..f62c02348f9a 100644 --- a/arch/x86/boot/compressed/Makefile +++ b/arch/x86/boot/compressed/Makefile @@ -107,6 +107,7 @@ endif vmlinux-objs-$(CONFIG_ACPI) += $(obj)/acpi.o vmlinux-objs-$(CONFIG_INTEL_TDX_GUEST) += $(obj)/tdx.o $(obj)/tdcall.o +vmlinux-objs-$(CONFIG_UNACCEPTED_MEMORY) += $(obj)/bitmap.o $(obj)/mem.o vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o diff --git a/arch/x86/boot/compressed/mem.c b/arch/x86/boot/compressed/mem.c new file mode 100644 index 000000000000..6b15a0ed8b54 --- /dev/null +++ b/arch/x86/boot/compressed/mem.c @@ -0,0 +1,73 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include "../cpuflags.h" +#include "bitmap.h" +#include "error.h" +#include "math.h" + +#define PMD_SHIFT 21 +#define PMD_SIZE (_AC(1, UL) << PMD_SHIFT) +#define PMD_MASK (~(PMD_SIZE - 1)) + +static inline void __accept_memory(phys_addr_t start, phys_addr_t end) +{ + /* Platform-specific memory-acceptance call goes here */ + error("Cannot accept memory"); +} + +/* + * The accepted memory bitmap only works at PMD_SIZE granularity. Take + * unaligned start/end addresses and either: + * 1. Accepts the memory immediately and in its entirety + * 2. Accepts unaligned parts, and marks *some* aligned part unaccepted + * + * The function will never reach the bitmap_set() with zero bits to set. + */ +void process_unaccepted_memory(struct boot_params *params, u64 start, u64 end) +{ + /* + * Ensure that at least one bit will be set in the bitmap by + * immediately accepting all regions under 2*PMD_SIZE. This is + * imprecise and may immediately accept some areas that could + * have been represented in the bitmap. But, results in simpler + * code below + * + * Consider case like this: + * + * | 4k | 2044k | 2048k | + * ^ 0x0 ^ 2MB ^ 4MB + * + * Only the first 4k has been accepted. The 0MB->2MB region can not be + * represented in the bitmap. The 2MB->4MB region can be represented in + * the bitmap. But, the 0MB->4MB region is <2*PMD_SIZE and will be + * immediately accepted in its entirety. 
+ */ if (end - start < 2 * PMD_SIZE) { + __accept_memory(start, end); + return; + } + + /* + * No matter how the start and end are aligned, at least one unaccepted + * PMD_SIZE area will remain to be marked in the bitmap. + */ + + /* Immediately accept a <PMD_SIZE piece at the start: */ + if (start & ~PMD_MASK) { + __accept_memory(start, round_up(start, PMD_SIZE)); + start = round_up(start, PMD_SIZE); + } + + /* Immediately accept a <PMD_SIZE piece at the end: */ + if (end & ~PMD_MASK) { + __accept_memory(round_down(end, PMD_SIZE), end); + end = round_down(end, PMD_SIZE); + } + + /* + * 'start' and 'end' are now both PMD-aligned. + * Record the range as being unaccepted: + */ + bitmap_set((unsigned long *)params->unaccepted_memory, + start / PMD_SIZE, (end - start) / PMD_SIZE); +} diff --git a/arch/x86/include/asm/unaccepted_memory.h b/arch/x86/include/asm/unaccepted_memory.h new file mode 100644 index 000000000000..df0736d32858 --- /dev/null +++ b/arch/x86/include/asm/unaccepted_memory.h @@ -0,0 +1,10 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2020 Intel Corporation */ +#ifndef _ASM_X86_UNACCEPTED_MEMORY_H +#define _ASM_X86_UNACCEPTED_MEMORY_H + +struct boot_params; + +void process_unaccepted_memory(struct boot_params *params, u64 start, u64 num); + +#endif diff --git a/arch/x86/include/uapi/asm/bootparam.h b/arch/x86/include/uapi/asm/bootparam.h index 01d19fc22346..630a54046af0 100644 --- a/arch/x86/include/uapi/asm/bootparam.h +++ b/arch/x86/include/uapi/asm/bootparam.h @@ -189,7 +189,7 @@ struct boot_params { __u64 tboot_addr; /* 0x058 */ struct ist_info ist_info; /* 0x060 */ __u64 acpi_rsdp_addr; /* 0x070 */ - __u8 _pad3[8]; /* 0x078 */ + __u64 unaccepted_memory; /* 0x078 */ __u8 hd0_info[16]; /* obsolete! */ /* 0x080 */ __u8 hd1_info[16]; /* obsolete! */ /* 0x090 */ struct sys_desc_table sys_desc_table; /* obsolete! */ /* 0x0a0 */ diff --git a/drivers/firmware/efi/Kconfig b/drivers/firmware/efi/Kconfig index 043ca31c114e..231f1c70d1db 100644 --- a/drivers/firmware/efi/Kconfig +++ b/drivers/firmware/efi/Kconfig @@ -269,6 +269,20 @@ config EFI_COCO_SECRET virt/coco/efi_secret module to access the secrets, which in turn allows userspace programs to access the injected secrets. +config UNACCEPTED_MEMORY + bool + depends on EFI_STUB + help + Some Virtual Machine platforms, such as Intel TDX, require + some memory to be "accepted" by the guest before it can be used. + This mechanism helps prevent malicious hosts from making changes + to guest memory. + + UEFI specification v2.9 introduced EFI_UNACCEPTED_MEMORY memory type. + + This option adds support for unaccepted memory and makes such memory + usable by the kernel. 
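As a quick illustration of the bitmap sizing used throughout this patch (one bit per 2MiB of physical address space, allocated later by allocate_unaccepted_bitmap()), the following stand-alone sketch reproduces the numbers from the commit message. It is not part of the patch; it is plain userspace C, and the names UNIT_SIZE and bitmap_bytes() are illustrative only:

#include <stdio.h>
#include <stdint.h>

#define UNIT_SIZE	(2ULL << 20)	/* one bit covers one 2MiB chunk */
#define BITS_PER_BYTE	8

/* Bytes of bitmap needed to cover physical addresses [0, max_addr) */
static uint64_t bitmap_bytes(uint64_t max_addr)
{
	uint64_t bits = (max_addr + UNIT_SIZE - 1) / UNIT_SIZE;

	return (bits + BITS_PER_BYTE - 1) / BITS_PER_BYTE;
}

int main(void)
{
	/* 64 GiB of address space -> 4096 bytes, i.e. a single 4k page */
	printf("64 GiB -> %llu bytes\n",
	       (unsigned long long)bitmap_bytes(64ULL << 30));

	/* 4 PiB (the worst case quoted above) -> 256 MiB of bitmap */
	printf("4 PiB  -> %llu MiB\n",
	       (unsigned long long)(bitmap_bytes(4ULL << 50) >> 20));

	return 0;
}

The same calculation shows up in the stub below as DIV_ROUND_UP(max_addr, PMD_SIZE * BITS_PER_BYTE).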
+ config EFI_EMBEDDED_FIRMWARE bool select CRYPTO_LIB_SHA256 diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c index abeff7dc0b58..7dce06e419c5 100644 --- a/drivers/firmware/efi/efi.c +++ b/drivers/firmware/efi/efi.c @@ -843,6 +843,7 @@ static __initdata char memory_type_name[][13] = { "MMIO Port", "PAL Code", "Persistent", + "Unaccepted", }; char * __init efi_md_typeattr_format(char *buf, size_t size, diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c index fff81843169c..1643ddbde249 100644 --- a/drivers/firmware/efi/libstub/x86-stub.c +++ b/drivers/firmware/efi/libstub/x86-stub.c @@ -15,6 +15,7 @@ #include #include #include +#include #include "efistub.h" @@ -613,6 +614,16 @@ setup_e820(struct boot_params *params, struct setup_data *e820ext, u32 e820ext_s e820_type = E820_TYPE_PMEM; break; + case EFI_UNACCEPTED_MEMORY: + if (!IS_ENABLED(CONFIG_UNACCEPTED_MEMORY)) { + efi_warn_once( +"The system has unaccepted memory, but kernel does not support it\nConsider enabling CONFIG_UNACCEPTED_MEMORY\n"); + continue; + } + e820_type = E820_TYPE_RAM; + process_unaccepted_memory(params, d->phys_addr, + d->phys_addr + PAGE_SIZE * d->num_pages); + break; default: continue; } @@ -677,6 +688,57 @@ static efi_status_t alloc_e820ext(u32 nr_desc, struct setup_data **e820ext, return status; } +static efi_status_t allocate_unaccepted_bitmap(struct boot_params *params, + __u32 nr_desc, + struct efi_boot_memmap *map) +{ + unsigned long *mem = NULL; + u64 size, max_addr = 0; + efi_status_t status; + bool found = false; + int i; + + /* Check if there's any unaccepted memory and find the max address */ + for (i = 0; i < nr_desc; i++) { + efi_memory_desc_t *d; + unsigned long m = (unsigned long)map->map; + + d = efi_early_memdesc_ptr(m, map->desc_size, i); + if (d->type == EFI_UNACCEPTED_MEMORY) + found = true; + if (d->phys_addr + d->num_pages * PAGE_SIZE > max_addr) + max_addr = d->phys_addr + d->num_pages * PAGE_SIZE; + } + + if (!found) { + params->unaccepted_memory = 0; + return EFI_SUCCESS; + } + + /* + * If unaccepted memory is present, allocate a bitmap to track what + * memory has to be accepted before access. + * + * One bit in the bitmap represents 2MiB in the address space: + * A 4k bitmap can track 64GiB of physical address space. + * + * In the worst case scenario -- a huge hole in the middle of the + * address space -- It needs 256MiB to handle 4PiB of the address + * space. + * + * The bitmap will be populated in setup_e820() according to the memory + * map after efi_exit_boot_services(). 
+ */ + size = DIV_ROUND_UP(max_addr, PMD_SIZE * BITS_PER_BYTE); + status = efi_allocate_pages(size, (unsigned long *)&mem, ULONG_MAX); + if (status == EFI_SUCCESS) { + memset(mem, 0, size); + params->unaccepted_memory = (unsigned long)mem; + } + + return status; +} + static efi_status_t allocate_e820(struct boot_params *params, struct setup_data **e820ext, u32 *e820ext_size) @@ -697,6 +759,9 @@ static efi_status_t allocate_e820(struct boot_params *params, status = alloc_e820ext(nr_e820ext, e820ext, e820ext_size); } + if (IS_ENABLED(CONFIG_UNACCEPTED_MEMORY) && status == EFI_SUCCESS) + status = allocate_unaccepted_bitmap(params, nr_desc, map); + efi_bs_call(free_pool, map); return status; } diff --git a/include/linux/efi.h b/include/linux/efi.h index 04a733f0ba95..1d4f0343c710 100644 --- a/include/linux/efi.h +++ b/include/linux/efi.h @@ -108,7 +108,8 @@ typedef struct { #define EFI_MEMORY_MAPPED_IO_PORT_SPACE 12 #define EFI_PAL_CODE 13 #define EFI_PERSISTENT_MEMORY 14 -#define EFI_MAX_MEMORY_TYPE 15 +#define EFI_UNACCEPTED_MEMORY 15 +#define EFI_MAX_MEMORY_TYPE 16 /* Attribute values: */ #define EFI_MEMORY_UC ((u64)0x0000000000000001ULL) /* uncached */ From patchwork Thu Mar 30 11:49:50 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A. Shutemov" X-Patchwork-Id: 77154 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp1067314vqo; Thu, 30 Mar 2023 04:57:40 -0700 (PDT) X-Google-Smtp-Source: AKy350a66opna0lzf/WWL4bDo/fXARKks+7JZasJwo38ake2OIxwAOoZ0aE6VgR1EfpS7hMSAw9h X-Received: by 2002:a62:5210:0:b0:5d9:f3a6:a925 with SMTP id g16-20020a625210000000b005d9f3a6a925mr19680201pfb.24.1680177460552; Thu, 30 Mar 2023 04:57:40 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1680177460; cv=none; d=google.com; s=arc-20160816; b=Hh+q5FSuKA8cLMBZKyTPLIh0S8yIu9dOcwcWXMTQjXaSZ+jFUSTfIvnJbA/mPPVFlj PUBiql7IEE/GYm10Vh/fgslnZdeyieCrTCt4DCEm6jfvNxosDbBDaqLtiTMb9tXsqY9z 5jIAlNKff1CwVKp5TusMlyuzENUeivVyHwXIwT78nYYgRe7y/RKS118Y/q7rCtx4xaXH 6wGb2WGw6Cqe2Js9VoZey3ho/qW6/0dSaUfkrND6uSDhgjjviR1fba6ywQJx5mNaJJCT brHZTMf8AJqejLlXOLdoQQe9G1ffSWjrdD/3Bs7EiVglhhsfpkq1qcjODJD993hh+nVT Uwlw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=sJ0jWyXBmb0hmPD/qh9m7QTKFLzmCPwA0IjlFxXM8+0=; b=nTfXjs0NlgzESjexxUWtUByEguQEmHNNI+IkN/CLqJyctRVPyXbgoZjmFT/RbGOvmr IoZ1fo4PRSQmGyfmigzsNjSGVZhnIgVzdhZGuabb9N1koFvCmu8j+zWuaHRkMp1xlaLp 0KwzNWbRdkVW7I5WbKagp3jSm+6AiNN7MFH+yWtYsy6vtJtQFa89xqu1G2o+y3Cw1zct GpGYP6dN7rRdasxs6fFxosDB9oVxGa82aDTFSxuKtsSE71JgMjuhDexZWZ3/1dtn+G0x d7g+hD2odMeu4GvR0NnxWyFz7UqY4+Goq5kC1j/IN85GO0K9rZ3++MDuYsyfQj6+kIDT UWvg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=HEaNTLXk; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id y23-20020a634b17000000b0050a16d20f0bsi35707793pga.696.2023.03.30.04.57.27; Thu, 30 Mar 2023 04:57:40 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=HEaNTLXk; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231773AbjC3Lv0 (ORCPT + 99 others); Thu, 30 Mar 2023 07:51:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44646 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231702AbjC3LvA (ORCPT ); Thu, 30 Mar 2023 07:51:00 -0400 Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6C20D93E1; Thu, 30 Mar 2023 04:50:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1680177037; x=1711713037; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=d4/KfDFRLyY7sMUS/zMPpAEAi7ubl+P+2mX+6hzyiNg=; b=HEaNTLXkEQvLPiwN7l4iWi5nZmb18tGj7151ZUd4Elj7oYZBJedOnkKh Q+0b0IOrx4b9r/BnPfK7VM9WxsNI437J8dWtMQPUFe5Z1CHGPRtCfscHC 6v1t10Tk79ZVHUECmo8RqnDYuWuft/EK2Z+hHe7nn6t1EFSXYIDct0fjr PP8ZVtoAmHfMMRULVHCh7eQsBk7fBao4JNwjmTN8htXMSIoOYZaKLRAUa uvMsoOZcegItKKxW/MGLsuBR5+jI5MjD1dFN0sGwKLOhprrYm9V9OXQCN pUgdCJmkNPzzebgwlKCh9w5MR7kg0ujU234ZZ6A84waNnZT/EWB6l+ymW w==; X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="342756789" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="342756789" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Mar 2023 04:50:32 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="634856504" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="634856504" Received: from ngreburx-mobl.ger.corp.intel.com (HELO box.shutemov.name) ([10.251.209.91]) by orsmga003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Mar 2023 04:50:16 -0700 Received: by box.shutemov.name (Postfix, from userid 1000) id 3E8F41044F3; Thu, 30 Mar 2023 14:50:00 +0300 (+03) From: "Kirill A. Shutemov" To: Borislav Petkov , Andy Lutomirski , Sean Christopherson , Andrew Morton , Joerg Roedel , Ard Biesheuvel Cc: Andi Kleen , Kuppuswamy Sathyanarayanan , David Rientjes , Vlastimil Babka , Tom Lendacky , Thomas Gleixner , Peter Zijlstra , Paolo Bonzini , Ingo Molnar , Dario Faggioli , Dave Hansen , Mike Rapoport , David Hildenbrand , Mel Gorman , marcelo.cerri@canonical.com, tim.gardner@canonical.com, khalid.elmously@canonical.com, philip.cox@canonical.com, aarcange@redhat.com, peterx@redhat.com, x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev, linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A. 
Shutemov" Subject: [PATCHv9 08/14] x86/boot/compressed: Handle unaccepted memory Date: Thu, 30 Mar 2023 14:49:50 +0300 Message-Id: <20230330114956.20342-9-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230330114956.20342-1-kirill.shutemov@linux.intel.com> References: <20230330114956.20342-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 X-Spam-Status: No, score=-2.4 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,RCVD_IN_MSPIKE_H3, RCVD_IN_MSPIKE_WL,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761793760702838505?= X-GMAIL-MSGID: =?utf-8?q?1761793760702838505?= The firmware will pre-accept the memory used to run the stub. But, the stub is responsible for accepting the memory into which it decompresses the main kernel. Accept memory just before decompression starts. The stub is also responsible for choosing a physical address in which to place the decompressed kernel image. The KASLR mechanism will randomize this physical address. Since the unaccepted memory region is relatively small, KASLR would be quite ineffective if it only used the pre-accepted area (EFI_CONVENTIONAL_MEMORY). Ensure that KASLR randomizes among the entire physical address space by also including EFI_UNACCEPTED_MEMORY. Signed-off-by: Kirill A. Shutemov --- arch/x86/boot/compressed/Makefile | 2 +- arch/x86/boot/compressed/efi.h | 1 + arch/x86/boot/compressed/kaslr.c | 35 ++++++++++++++++-------- arch/x86/boot/compressed/mem.c | 18 ++++++++++++ arch/x86/boot/compressed/misc.c | 6 ++++ arch/x86/boot/compressed/misc.h | 6 ++++ arch/x86/include/asm/unaccepted_memory.h | 2 ++ 7 files changed, 57 insertions(+), 13 deletions(-) diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile index f62c02348f9a..74f7adee46ad 100644 --- a/arch/x86/boot/compressed/Makefile +++ b/arch/x86/boot/compressed/Makefile @@ -107,7 +107,7 @@ endif vmlinux-objs-$(CONFIG_ACPI) += $(obj)/acpi.o vmlinux-objs-$(CONFIG_INTEL_TDX_GUEST) += $(obj)/tdx.o $(obj)/tdcall.o -vmlinux-objs-$(CONFIG_UNACCEPTED_MEMORY) += $(obj)/bitmap.o $(obj)/mem.o +vmlinux-objs-$(CONFIG_UNACCEPTED_MEMORY) += $(obj)/bitmap.o $(obj)/find.o $(obj)/mem.o vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o diff --git a/arch/x86/boot/compressed/efi.h b/arch/x86/boot/compressed/efi.h index 7db2f41b54cd..cf475243b6d5 100644 --- a/arch/x86/boot/compressed/efi.h +++ b/arch/x86/boot/compressed/efi.h @@ -32,6 +32,7 @@ typedef struct { } efi_table_hdr_t; #define EFI_CONVENTIONAL_MEMORY 7 +#define EFI_UNACCEPTED_MEMORY 15 #define EFI_MEMORY_MORE_RELIABLE \ ((u64)0x0000000000010000ULL) /* higher reliability */ diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c index 454757fbdfe5..749f0fe7e446 100644 --- a/arch/x86/boot/compressed/kaslr.c +++ b/arch/x86/boot/compressed/kaslr.c @@ -672,6 +672,28 @@ static bool process_mem_region(struct mem_vector *region, } #ifdef CONFIG_EFI + +/* + * Only EFI_CONVENTIONAL_MEMORY and EFI_UNACCEPTED_MEMORY (if supported) are + * guaranteed to be free. 
+ * + * It is more conservative in picking free memory than the EFI spec allows: + * + * According to the spec, EFI_BOOT_SERVICES_{CODE|DATA} are also free memory + * and thus available to place the kernel image into, but in practice there's + * firmware where using that memory leads to crashes. + */ +static inline bool memory_type_is_free(efi_memory_desc_t *md) +{ + if (md->type == EFI_CONVENTIONAL_MEMORY) + return true; + + if (md->type == EFI_UNACCEPTED_MEMORY) + return IS_ENABLED(CONFIG_UNACCEPTED_MEMORY); + + return false; +} + /* * Returns true if we processed the EFI memmap, which we prefer over the E820 * table if it is available. @@ -716,18 +738,7 @@ process_efi_entries(unsigned long minimum, unsigned long image_size) for (i = 0; i < nr_desc; i++) { md = efi_early_memdesc_ptr(pmap, e->efi_memdesc_size, i); - /* - * Here we are more conservative in picking free memory than - * the EFI spec allows: - * - * According to the spec, EFI_BOOT_SERVICES_{CODE|DATA} are also - * free memory and thus available to place the kernel image into, - * but in practice there's firmware where using that memory leads - * to crashes. - * - * Only EFI_CONVENTIONAL_MEMORY is guaranteed to be free. - */ - if (md->type != EFI_CONVENTIONAL_MEMORY) + if (!memory_type_is_free(md)) continue; if (efi_soft_reserve_enabled() && diff --git a/arch/x86/boot/compressed/mem.c b/arch/x86/boot/compressed/mem.c index 6b15a0ed8b54..de858a5180b6 100644 --- a/arch/x86/boot/compressed/mem.c +++ b/arch/x86/boot/compressed/mem.c @@ -3,12 +3,15 @@ #include "../cpuflags.h" #include "bitmap.h" #include "error.h" +#include "find.h" #include "math.h" #define PMD_SHIFT 21 #define PMD_SIZE (_AC(1, UL) << PMD_SHIFT) #define PMD_MASK (~(PMD_SIZE - 1)) +extern struct boot_params *boot_params; + static inline void __accept_memory(phys_addr_t start, phys_addr_t end) { /* Platform-specific memory-acceptance call goes here */ @@ -71,3 +74,18 @@ void process_unaccepted_memory(struct boot_params *params, u64 start, u64 end) bitmap_set((unsigned long *)params->unaccepted_memory, start / PMD_SIZE, (end - start) / PMD_SIZE); } + +void accept_memory(phys_addr_t start, phys_addr_t end) +{ + unsigned long range_start, range_end; + unsigned long *bitmap, bitmap_size; + + bitmap = (unsigned long *)boot_params->unaccepted_memory; + range_start = start / PMD_SIZE; + bitmap_size = DIV_ROUND_UP(end, PMD_SIZE); + + for_each_set_bitrange_from(range_start, range_end, bitmap, bitmap_size) { + __accept_memory(range_start * PMD_SIZE, range_end * PMD_SIZE); + bitmap_clear(bitmap, range_start, range_end - range_start); + } +} diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c index 014ff222bf4b..186bfd53e042 100644 --- a/arch/x86/boot/compressed/misc.c +++ b/arch/x86/boot/compressed/misc.c @@ -455,6 +455,12 @@ asmlinkage __visible void *extract_kernel(void *rmode, memptr heap, #endif debug_putstr("\nDecompressing Linux... "); + + if (boot_params->unaccepted_memory) { + debug_putstr("Accepting memory... 
"); + accept_memory(__pa(output), __pa(output) + needed_size); + } + __decompress(input_data, input_len, NULL, NULL, output, output_len, NULL, error); entry_offset = parse_elf(output); diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h index 2f155a0e3041..9663d1839f54 100644 --- a/arch/x86/boot/compressed/misc.h +++ b/arch/x86/boot/compressed/misc.h @@ -247,4 +247,10 @@ static inline unsigned long efi_find_vendor_table(struct boot_params *bp, } #endif /* CONFIG_EFI */ +#ifdef CONFIG_UNACCEPTED_MEMORY +void accept_memory(phys_addr_t start, phys_addr_t end); +#else +static inline void accept_memory(phys_addr_t start, phys_addr_t end) {} +#endif + #endif /* BOOT_COMPRESSED_MISC_H */ diff --git a/arch/x86/include/asm/unaccepted_memory.h b/arch/x86/include/asm/unaccepted_memory.h index df0736d32858..41fbfc798100 100644 --- a/arch/x86/include/asm/unaccepted_memory.h +++ b/arch/x86/include/asm/unaccepted_memory.h @@ -7,4 +7,6 @@ struct boot_params; void process_unaccepted_memory(struct boot_params *params, u64 start, u64 num); +void accept_memory(phys_addr_t start, phys_addr_t end); + #endif From patchwork Thu Mar 30 11:49:51 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A. Shutemov" X-Patchwork-Id: 77146 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp1065232vqo; Thu, 30 Mar 2023 04:53:08 -0700 (PDT) X-Google-Smtp-Source: AKy350ZsXwvfpSe0vsRmNDUZNU/0E06wLpsZs1w0qZr4EVuuOeEoSbTOODO4VCv/xKoEelgRgL0b X-Received: by 2002:a17:902:e743:b0:1a2:19c1:a96d with SMTP id p3-20020a170902e74300b001a219c1a96dmr6767091plf.23.1680177188187; Thu, 30 Mar 2023 04:53:08 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1680177188; cv=none; d=google.com; s=arc-20160816; b=cCKBZm+wvlAwqhoMkXcyo0Gbp3OGxs19uqgv58d9ce7Ji5Osmq3lt0GrxFxB0LOUp0 McGOdtTpBwKrLt8XAV7Qw3GOqXeav33jApX/5Sz2Sdm3V13xUkaHDgK4O9BgQBJN+s9m eUXFHtAL8o/ZY925jukVjBYnxsECRJ9exRnUjnz+C3JqFz1kB8HX4wuLVX+/NJ4Re7Wn PFz2mOZoAnwBhLXwHgwepDRNC1Q0AHypUHq1AyDKG81ySsLLPU2IrVDZFQy4c0zLHSaX n6j6Aa37k1R07F/8dQTv2Xr+b0bFob8nNktK4cV1gHRtYv3bWcD7rpp1WI5yZlQu3qNR hdAA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=CDsQGJ5mVHwKQTSqTirUAmTn1KY5xyP0qi9c6MOAeR0=; b=xWMKDbllLersBqSz/uz1PslYL28VrB3XoY6NT3TznL1Iah//edwl9qk5JtFtr7ZAm4 lD+96DN0llH5S3iWhQlLCxJzLYoVk3k7yf1k7pPQIxRamSRJKeurmFl2WgOalwhbgTWs f4xa23iClLReiGMuO6xGcMINS9coJb8YM1wKprYH5R1XV9/vRd5AzdX2AFKvc+4F2aT8 ChH4BFJ2tbd9BxLRUev3N9XH6iDMob0aJaZqk6ZtFU0FHwGNMSDRGMQczFTVptwADROD gzILsh2UgVsaA7QLbQpZBTCnteEWvCJ3PWbE3Q57MtHS01yZVjGDtHrB0wXULJ9tRw9l B5VA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=G3v8P3ru; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id n12-20020a170903110c00b001a27af16626si1338189plh.569.2023.03.30.04.52.55; Thu, 30 Mar 2023 04:53:08 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=G3v8P3ru; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229436AbjC3Lue (ORCPT + 99 others); Thu, 30 Mar 2023 07:50:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43530 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231159AbjC3Lu1 (ORCPT ); Thu, 30 Mar 2023 07:50:27 -0400 Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 36BE2A24B; Thu, 30 Mar 2023 04:50:26 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1680177026; x=1711713026; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=f7/hrgLyEe7uCgqzPwQBAiXJTS6cTMMnuaKLYQlQP9E=; b=G3v8P3ru+eaQYI7MqNa9vb+Hy499tRdtoMOlKZq9wzcolQVJpqIHwEgb LaH0yrgdJTc3rbq+tzAm0G3G8KhCWNrPxmGk9y3HrsCamrqtyFGw3sOMH +qOa3/Xj8miI5t18P6nXLhoDnStt6l7bu/UiuEF6+Dmpare1Hj0ws11VH CGcIGibNfP5jJCGriwGOdIH/MlBzpnicwRMo6Li15/PlR/BQh3WgP3PWd NcKFglfpg5MvfcLUtQ2hkyMxK5RZVJEYY1y48MOpTb3NNSWR/4E54m53v sp9lNV0qRbFbnIfERimIFYTbOLovQVCB/qr62l2PoODEL68EkUmmewYOk A==; X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="339868489" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="339868489" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Mar 2023 04:50:24 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="1014401448" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="1014401448" Received: from ngreburx-mobl.ger.corp.intel.com (HELO box.shutemov.name) ([10.251.209.91]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Mar 2023 04:50:17 -0700 Received: by box.shutemov.name (Postfix, from userid 1000) id 49CA4104545; Thu, 30 Mar 2023 14:50:00 +0300 (+03) From: "Kirill A. Shutemov" To: Borislav Petkov , Andy Lutomirski , Sean Christopherson , Andrew Morton , Joerg Roedel , Ard Biesheuvel Cc: Andi Kleen , Kuppuswamy Sathyanarayanan , David Rientjes , Vlastimil Babka , Tom Lendacky , Thomas Gleixner , Peter Zijlstra , Paolo Bonzini , Ingo Molnar , Dario Faggioli , Dave Hansen , Mike Rapoport , David Hildenbrand , Mel Gorman , marcelo.cerri@canonical.com, tim.gardner@canonical.com, khalid.elmously@canonical.com, philip.cox@canonical.com, aarcange@redhat.com, peterx@redhat.com, x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev, linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A. 
Shutemov" , Mike Rapoport Subject: [PATCHv9 09/14] x86/mm: Reserve unaccepted memory bitmap Date: Thu, 30 Mar 2023 14:49:51 +0300 Message-Id: <20230330114956.20342-10-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230330114956.20342-1-kirill.shutemov@linux.intel.com> References: <20230330114956.20342-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 X-Spam-Status: No, score=-2.4 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761793474978255729?= X-GMAIL-MSGID: =?utf-8?q?1761793474978255729?= A given page of memory can only be accepted once. The kernel has to accept memory both in the early decompression stage and during normal runtime. A bitmap is used to communicate the acceptance state of each page between the decompression stage and normal runtime. boot_params is used to communicate the location of the bitmap throughout the boot. The bitmap is allocated and initially populated in the EFI stub. The decompression stage accepts pages required for kernel/initrd and marks these pages accordingly in the bitmap. The main kernel picks up the bitmap from the same boot_params and uses it to determine what has to be accepted on allocation. In the runtime kernel, reserve the bitmap's memory to ensure nothing overwrites it. The size of the bitmap is determined with e820__end_of_ram_pfn(), which relies on setup_e820() marking unaccepted memory as E820_TYPE_RAM. Signed-off-by: Kirill A. Shutemov Acked-by: Mike Rapoport --- arch/x86/kernel/e820.c | 17 +++++++++++++++ 1 file changed, 17 insertions(+) diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c index fb8cf953380d..483c36a28d2e 100644 --- a/arch/x86/kernel/e820.c +++ b/arch/x86/kernel/e820.c @@ -1316,6 +1316,23 @@ void __init e820__memblock_setup(void) int i; u64 end; + /* + * Mark unaccepted memory bitmap reserved. + * + * This kind of reservation usually done from early_reserve_memory(), + * but early_reserve_memory() called before e820__memory_setup(), so + * e820_table is not finalized and e820__end_of_ram_pfn() cannot be + * used to get correct RAM size. + */ + if (boot_params.unaccepted_memory) { + unsigned long size; + + /* One bit per 2MB */ + size = DIV_ROUND_UP(e820__end_of_ram_pfn() * PAGE_SIZE, + PMD_SIZE * BITS_PER_BYTE); + memblock_reserve(boot_params.unaccepted_memory, size); + } + /* * The bootstrap memblock region count maximum is 128 entries * (INIT_MEMBLOCK_REGIONS), but EFI might pass us more E820 entries From patchwork Thu Mar 30 11:49:52 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A. 
Shutemov" X-Patchwork-Id: 77150 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp1066913vqo; Thu, 30 Mar 2023 04:56:47 -0700 (PDT) X-Google-Smtp-Source: AKy350ZLVPbDSGnji/0EkqbFmVuCQ+4yv9DXgO6l1uOho0wHV66t5cPluSugf3BmRLyExMqzU6KA X-Received: by 2002:a17:903:2343:b0:1a1:ee8c:eef8 with SMTP id c3-20020a170903234300b001a1ee8ceef8mr28262158plh.2.1680177406806; Thu, 30 Mar 2023 04:56:46 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1680177406; cv=none; d=google.com; s=arc-20160816; b=afoi0DXS2uxc7OkEVwqbXLJ0Oo/aQ/Z3FX6/sX8mUegcAT6g4lYfBG1mZBiPi1DPE1 MXz//XeJ74NpWADQEY7TC8B0FY8rw2VwSJMHtRerMDa0s/f7cwoAUxFmQEre8zcmTASJ l9WQO3I8YCJLE9nVW6OaTNFcUUK8z6QXUxUBTWDk1sbNEro4sFva05TW/M7PPNqqiW2r 713M4LAqtPaZ3IsxHEnPzg4nbQcP04WWgr4FKKAlPS+UkDrE55kU1WflHFxEMYNqxQbO HeYP9fFcn+24kVksPFAcGCp4XSs90AsnbAsiyIhJnF3xCsnLZaaL7ZDvmSdxxydroHfT cNRQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=4c+6RTABm8IpgO4JlTWZJ1FT6J6ytqIY0EMrGiPzEdU=; b=MzfXTlzCMtSf4w03OoY+fT4euXDf/HT3f40nTiuURkfTopYrO76cPPBSA1U3OFq4S4 mSSczssw1/YkGWhbTpKNLSPJtaR7a5YASx/FxC35rU1ZmBYwUc2OxLtGVnLxA83VRodI TfMPsqYUFK/RsgMQGuUx6UAJwNBeT8GPrk+drdNydyhLPSq4LNtbannT0JUcI3rx16wI nipMuUQDewLKaH3QF6K2cgefJ14e+DVqnjtjJAHRjPaLrqFTtJWPn/zQx/QG4Z30Mqnj WLgqT+RDjie6KksgoDTVvKekRaF0BCfCsDQlYgscW3z/L4bSUyHgIdHvCpNL49lgDFxR RPTA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=MVWl1k1W; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id jw12-20020a170903278c00b0019f359c651esi33076994plb.556.2023.03.30.04.56.34; Thu, 30 Mar 2023 04:56:46 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=MVWl1k1W; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231839AbjC3Lus (ORCPT + 99 others); Thu, 30 Mar 2023 07:50:48 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43778 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231499AbjC3Lu2 (ORCPT ); Thu, 30 Mar 2023 07:50:28 -0400 Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 21FD6900D; Thu, 30 Mar 2023 04:50:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1680177027; x=1711713027; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=EdpB5LGN65LHmZVLPoPRNthJZeU3YLHWwC+hwkbMqQk=; b=MVWl1k1WUDMU4a/R81WPZZDk5VJufX9wbZwKQRQascuQDZKVcQFW+46p IZ9nExMHfiWZqedPnMHGELbAUK5iGJzuUCe6ryrDElWkmcD5mGkbcr2T2 UdEmnP9pEKNPkDkWKgTtntOLDfP5mX2iSy/R+UFqX3Faqz9boUNaCMRcM PoHyP2EVIrBIrhuKXSjFqPZVXYSf7FMByZdSiQOSSEb/3OyB2lTn4nzJt un3Ccq/7dD6P7HceVbnsE2l5GJyqZomX58YO8spQfyNAW8dzzxqXy8dbJ E6eibpj4tSxSHaPpbPJnBseG/TvSn+8HThXUDBjOPpHiVm4ydR23Stb4r Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="339868511" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="339868511" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Mar 2023 04:50:25 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="1014401451" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="1014401451" Received: from ngreburx-mobl.ger.corp.intel.com (HELO box.shutemov.name) ([10.251.209.91]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Mar 2023 04:50:17 -0700 Received: by box.shutemov.name (Postfix, from userid 1000) id 54CD11046EE; Thu, 30 Mar 2023 14:50:00 +0300 (+03) From: "Kirill A. Shutemov" To: Borislav Petkov , Andy Lutomirski , Sean Christopherson , Andrew Morton , Joerg Roedel , Ard Biesheuvel Cc: Andi Kleen , Kuppuswamy Sathyanarayanan , David Rientjes , Vlastimil Babka , Tom Lendacky , Thomas Gleixner , Peter Zijlstra , Paolo Bonzini , Ingo Molnar , Dario Faggioli , Dave Hansen , Mike Rapoport , David Hildenbrand , Mel Gorman , marcelo.cerri@canonical.com, tim.gardner@canonical.com, khalid.elmously@canonical.com, philip.cox@canonical.com, aarcange@redhat.com, peterx@redhat.com, x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev, linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A. 
Shutemov" Subject: [PATCHv9 10/14] x86/mm: Provide helpers for unaccepted memory Date: Thu, 30 Mar 2023 14:49:52 +0300 Message-Id: <20230330114956.20342-11-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230330114956.20342-1-kirill.shutemov@linux.intel.com> References: <20230330114956.20342-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 X-Spam-Status: No, score=-2.4 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761793704165299018?= X-GMAIL-MSGID: =?utf-8?q?1761793704165299018?= Core-mm requires a few helpers to support unaccepted memory: - accept_memory() checks the range of addresses against the bitmap and accepts memory if needed. - range_contains_unaccepted_memory() checks if anything within the range requires acceptance. Signed-off-by: Kirill A. Shutemov --- arch/x86/include/asm/page.h | 3 ++ arch/x86/include/asm/unaccepted_memory.h | 4 ++ arch/x86/mm/Makefile | 2 + arch/x86/mm/unaccepted_memory.c | 61 ++++++++++++++++++++++++ 4 files changed, 70 insertions(+) create mode 100644 arch/x86/mm/unaccepted_memory.c diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h index d18e5c332cb9..4bab2bb2c9c0 100644 --- a/arch/x86/include/asm/page.h +++ b/arch/x86/include/asm/page.h @@ -19,6 +19,9 @@ struct page; #include + +#include + extern struct range pfn_mapped[]; extern int nr_pfn_mapped; diff --git a/arch/x86/include/asm/unaccepted_memory.h b/arch/x86/include/asm/unaccepted_memory.h index 41fbfc798100..89fc91c61560 100644 --- a/arch/x86/include/asm/unaccepted_memory.h +++ b/arch/x86/include/asm/unaccepted_memory.h @@ -7,6 +7,10 @@ struct boot_params; void process_unaccepted_memory(struct boot_params *params, u64 start, u64 num); +#ifdef CONFIG_UNACCEPTED_MEMORY + void accept_memory(phys_addr_t start, phys_addr_t end); +bool range_contains_unaccepted_memory(phys_addr_t start, phys_addr_t end); #endif +#endif diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile index c80febc44cd2..b0ef1755e5c8 100644 --- a/arch/x86/mm/Makefile +++ b/arch/x86/mm/Makefile @@ -67,3 +67,5 @@ obj-$(CONFIG_AMD_MEM_ENCRYPT) += mem_encrypt_amd.o obj-$(CONFIG_AMD_MEM_ENCRYPT) += mem_encrypt_identity.o obj-$(CONFIG_AMD_MEM_ENCRYPT) += mem_encrypt_boot.o + +obj-$(CONFIG_UNACCEPTED_MEMORY) += unaccepted_memory.o diff --git a/arch/x86/mm/unaccepted_memory.c b/arch/x86/mm/unaccepted_memory.c new file mode 100644 index 000000000000..1df918b21469 --- /dev/null +++ b/arch/x86/mm/unaccepted_memory.c @@ -0,0 +1,61 @@ +// SPDX-License-Identifier: GPL-2.0-only +#include +#include +#include +#include + +#include +#include +#include + +/* Protects unaccepted memory bitmap */ +static DEFINE_SPINLOCK(unaccepted_memory_lock); + +void accept_memory(phys_addr_t start, phys_addr_t end) +{ + unsigned long range_start, range_end; + unsigned long *bitmap; + unsigned long flags; + + if (!boot_params.unaccepted_memory) + return; + + bitmap = __va(boot_params.unaccepted_memory); + range_start = start / PMD_SIZE; + + spin_lock_irqsave(&unaccepted_memory_lock, flags); + for_each_set_bitrange_from(range_start, range_end, bitmap, + DIV_ROUND_UP(end, PMD_SIZE)) { + unsigned long len = range_end - range_start; + + /* Platform-specific 
memory-acceptance call goes here */ + panic("Cannot accept memory: unknown platform\n"); + bitmap_clear(bitmap, range_start, len); + } + spin_unlock_irqrestore(&unaccepted_memory_lock, flags); +} + +bool range_contains_unaccepted_memory(phys_addr_t start, phys_addr_t end) +{ + unsigned long *bitmap; + unsigned long flags; + bool ret = false; + + if (!boot_params.unaccepted_memory) + return 0; + + bitmap = __va(boot_params.unaccepted_memory); + + spin_lock_irqsave(&unaccepted_memory_lock, flags); + while (start < end) { + if (test_bit(start / PMD_SIZE, bitmap)) { + ret = true; + break; + } + + start += PMD_SIZE; + } + spin_unlock_irqrestore(&unaccepted_memory_lock, flags); + + return ret; +}
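To illustrate what the two helpers above do, here is a minimal user-space sketch of the same idea: one bit per 2M (PMD_SIZE) chunk, set while the chunk is still unaccepted and cleared once it has been accepted. The bitmap size, function names and plain C bit operations are illustrative stand-ins for the kernel's bitmap and spinlock primitives, not code from the patch.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PMD_SIZE      (2ULL * 1024 * 1024)   /* 2M granularity, as in the patch */
#define COVERED_BITS  2048                   /* enough bits for 4 GiB in this toy */
#define BITS_PER_LONG (8 * sizeof(unsigned long))

static unsigned long bitmap[COVERED_BITS / BITS_PER_LONG]; /* bit set = unaccepted */

/* Rough analogue of range_contains_unaccepted_memory(): is any 2M chunk in
 * [start, end) still marked as unaccepted? */
static bool range_has_unaccepted(uint64_t start, uint64_t end)
{
	for (uint64_t addr = start; addr < end; addr += PMD_SIZE) {
		uint64_t bit = addr / PMD_SIZE;

		if (bitmap[bit / BITS_PER_LONG] & (1UL << (bit % BITS_PER_LONG)))
			return true;
	}
	return false;
}

/* Rough analogue of accept_memory(): "accept" every still-flagged chunk in the
 * range and clear its bit, so the work happens at most once per chunk. */
static void accept_range(uint64_t start, uint64_t end)
{
	for (uint64_t addr = start; addr < end; addr += PMD_SIZE) {
		uint64_t bit = addr / PMD_SIZE;

		if (bitmap[bit / BITS_PER_LONG] & (1UL << (bit % BITS_PER_LONG))) {
			/* the platform-specific acceptance call would go here */
			bitmap[bit / BITS_PER_LONG] &= ~(1UL << (bit % BITS_PER_LONG));
		}
	}
}

int main(void)
{
	/* Pretend the firmware left 256M..512M unaccepted. */
	for (uint64_t a = 256ULL << 20; a < 512ULL << 20; a += PMD_SIZE)
		bitmap[(a / PMD_SIZE) / BITS_PER_LONG] |= 1UL << ((a / PMD_SIZE) % BITS_PER_LONG);

	printf("unaccepted before: %d\n", range_has_unaccepted(0, 1ULL << 30));
	accept_range(0, 1ULL << 30);
	printf("unaccepted after:  %d\n", range_has_unaccepted(0, 1ULL << 30));
	return 0;
}

The real accept_memory() walks set bit ranges instead of individual bits and holds unaccepted_memory_lock around the walk; the clear-after-accept step is the part the sketch mirrors.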
From patchwork Thu Mar 30 11:49:53 2023 X-Patchwork-Submitter: "Kirill A. Shutemov" X-Patchwork-Id: 77157 From: "Kirill A. Shutemov" Cc: "Kirill A.
Shutemov" , Dave Hansen Subject: [PATCHv9 11/14] x86/mm: Avoid load_unaligned_zeropad() stepping into unaccepted memory Date: Thu, 30 Mar 2023 14:49:53 +0300 Message-Id: <20230330114956.20342-12-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230330114956.20342-1-kirill.shutemov@linux.intel.com> References: <20230330114956.20342-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 X-Spam-Status: No, score=-2.4 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,RCVD_IN_MSPIKE_H3, RCVD_IN_MSPIKE_WL,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761793866450752785?= X-GMAIL-MSGID: =?utf-8?q?1761793866450752785?= load_unaligned_zeropad() can lead to unwanted loads across page boundaries. The unwanted loads are typically harmless. But, they might be made to totally unrelated or even unmapped memory. load_unaligned_zeropad() relies on exception fixup (#PF, #GP and now #VE) to recover from these unwanted loads. But, this approach does not work for unaccepted memory. For TDX, a load from unaccepted memory will not lead to a recoverable exception within the guest. The guest will exit to the VMM where the only recourse is to terminate the guest. There are three parts to fix this issue and comprehensively avoid access to unaccepted memory. Together these ensure that an extra "guard" page is accepted in addition to the memory that needs to be used. 1. Implicitly extend the range_contains_unaccepted_memory(start, end) checks up to end+2M if 'end' is aligned on a 2M boundary. It may require checking 2M chunk beyond end of RAM. The bitmap allocation is modified to accommodate this. 2. Implicitly extend accept_memory(start, end) to end+2M if 'end' is aligned on a 2M boundary. 3. Set PageUnaccepted() on both memory that itself needs to be accepted *and* memory where the next page needs to be accepted. Essentially, make PageUnaccepted(page) a marker for whether work needs to be done to make 'page' usable. That work might include accepting pages in addition to 'page' itself. Side note: This leads to something strange. Pages which were accepted at boot, marked by the firmware as accepted and will never _need_ to be accepted might have PageUnaccepted() set on them. PageUnaccepted(page) is a cue to ensure that the next page is accepted before 'page' can be used. This is an actual, real-world problem which was discovered during TDX testing. Signed-off-by: Kirill A. Shutemov Reviewed-by: Dave Hansen --- arch/x86/mm/unaccepted_memory.c | 39 +++++++++++++++++++++++++ drivers/firmware/efi/libstub/x86-stub.c | 7 +++++ 2 files changed, 46 insertions(+) diff --git a/arch/x86/mm/unaccepted_memory.c b/arch/x86/mm/unaccepted_memory.c index 1df918b21469..a0a58486eb74 100644 --- a/arch/x86/mm/unaccepted_memory.c +++ b/arch/x86/mm/unaccepted_memory.c @@ -23,6 +23,38 @@ void accept_memory(phys_addr_t start, phys_addr_t end) bitmap = __va(boot_params.unaccepted_memory); range_start = start / PMD_SIZE; + /* + * load_unaligned_zeropad() can lead to unwanted loads across page + * boundaries. The unwanted loads are typically harmless. But, they + * might be made to totally unrelated or even unmapped memory. 
+ * load_unaligned_zeropad() relies on exception fixup (#PF, #GP and now + * #VE) to recover from these unwanted loads. + * + * But, this approach does not work for unaccepted memory. For TDX, a + * load from unaccepted memory will not lead to a recoverable exception + * within the guest. The guest will exit to the VMM where the only + * recourse is to terminate the guest. + * + * There are three parts to fix this issue and comprehensively avoid + * access to unaccepted memory. Together these ensure that an extra + * "guard" page is accepted in addition to the memory that needs to be + * used: + * + * 1. Implicitly extend the range_contains_unaccepted_memory(start, end) + * checks up to end+2M if 'end' is aligned on a 2M boundary. + * + * 2. Implicitly extend accept_memory(start, end) to end+2M if 'end' is + * aligned on a 2M boundary. (immediately following this comment) + * + * 3. Set PageUnaccepted() on both memory that itself needs to be + * accepted *and* memory where the next page needs to be accepted. + * Essentially, make PageUnaccepted(page) a marker for whether work + * needs to be done to make 'page' usable. That work might include + * accepting pages in addition to 'page' itself. + */ + if (!(end % PMD_SIZE)) + end += PMD_SIZE; + spin_lock_irqsave(&unaccepted_memory_lock, flags); for_each_set_bitrange_from(range_start, range_end, bitmap, DIV_ROUND_UP(end, PMD_SIZE)) { @@ -46,6 +78,13 @@ bool range_contains_unaccepted_memory(phys_addr_t start, phys_addr_t end) bitmap = __va(boot_params.unaccepted_memory); + /* + * Also consider the unaccepted state of the *next* page. See fix #1 in + * the comment on load_unaligned_zeropad() in accept_memory(). + */ + if (!(end % PMD_SIZE)) + end += PMD_SIZE; + spin_lock_irqsave(&unaccepted_memory_lock, flags); while (start < end) { if (test_bit(start / PMD_SIZE, bitmap)) { diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c index 1643ddbde249..1afe7b5b02e1 100644 --- a/drivers/firmware/efi/libstub/x86-stub.c +++ b/drivers/firmware/efi/libstub/x86-stub.c @@ -715,6 +715,13 @@ static efi_status_t allocate_unaccepted_bitmap(struct boot_params *params, return EFI_SUCCESS; } + /* + * range_contains_unaccepted_memory() may need to check one 2M chunk + * beyond the end of RAM to deal with load_unaligned_zeropad(). Make + * sure that the bitmap is large enough handle it. + */ + max_addr += PMD_SIZE; + /* * If unaccepted memory is present, allocate a bitmap to track what * memory has to be accepted before access. From patchwork Thu Mar 30 11:49:54 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A. 
Shutemov" X-Patchwork-Id: 77151 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp1067039vqo; Thu, 30 Mar 2023 04:57:04 -0700 (PDT) X-Google-Smtp-Source: AKy350ZRwFipg96XE81yvlPUEBRyMsleMs8JQlLfVeceF2h8igHh0elJ/pqPInaD5NHUoUt/YsGx X-Received: by 2002:a62:7bcf:0:b0:625:e346:c9e with SMTP id w198-20020a627bcf000000b00625e3460c9emr20234569pfc.6.1680177424198; Thu, 30 Mar 2023 04:57:04 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1680177424; cv=none; d=google.com; s=arc-20160816; b=uMnClz5Hpz6vnicjkSqUlvJ4GkU77iTUBLlq4BELJqbyrJ6RB/IRe+mHKZ2kIhde7k r/N41hdSLmiK7Zsw64H9JEKxqhkhQlzTSDpBhokJwhSg5Q5sd3a021mxFdAJwFfMahtu WaZySwQ7mjXpAGgpk7sJ3iILDA+utv7vAxADtMabgRcmS5XpaBDZYf6bsyTJyCdCy5k0 3yoDD+br88T/7c1lXxb6fwAzhtYZjskNET2ixsl5tCdaygd1o1d3jNO0Qvhz6pungdAo S786fa4YKCMiWWpxWGkp6EwyaKMgr95tkWGEV+9NdvzkeUys0MKBITqViQvgdevmlzze YnrA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=/VRyLYcUziIyvK5rDEWkSsh184m1aPMbQy/X3mwxy8E=; b=OKLlscygCTpM6WztRY3H1ZXU11pGnEAG8Ui/35L61Hs2U+GYqdF2nfflDmzcAs92Mg Bfk1MXVpevr6RpEW20U/VdYXae4ChEmiCjKQG7g7zopSXOWicpFjAHT8opAKkqbtFFCs b81stH9oZmPPDnPOvu9fKknxBo6e7bo9rB1cctogPN7drQJZY08mk1b0LykKfiOFC9Hp 0S4pXLQBUT5nKtAMDqgPpPnErZIWxvGwoEG4wFoKbEAuDK1TnQdvBlvPu+RrsqS8DtZu ZmoB0Tm1wzqEsijHw9nQDYVpZEEboTcaC3lfmLn6syqXHz+BYUr3byE27f97X3s1lhby 7d8g== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=T+aBvN5n; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
From: "Kirill A. Shutemov" Cc: "Kirill A.
Shutemov" , Dave Hansen Subject: [PATCHv9 12/14] x86/tdx: Make _tdx_hypercall() and __tdx_module_call() available in boot stub Date: Thu, 30 Mar 2023 14:49:54 +0300 Message-Id: <20230330114956.20342-13-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230330114956.20342-1-kirill.shutemov@linux.intel.com> References: <20230330114956.20342-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 X-Spam-Status: No, score=-2.4 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761793722657266527?= X-GMAIL-MSGID: =?utf-8?q?1761793722657266527?= Memory acceptance requires a hypercall and one or multiple module calls. Make helpers for the calls available in boot stub. It has to accept memory where kernel image and initrd are placed. Signed-off-by: Kirill A. Shutemov Reviewed-by: Dave Hansen --- arch/x86/coco/tdx/tdx.c | 32 ------------------- arch/x86/include/asm/shared/tdx.h | 51 +++++++++++++++++++++++++++++++ arch/x86/include/asm/tdx.h | 19 ------------ 3 files changed, 51 insertions(+), 51 deletions(-) diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c index 055300e08fb3..a9893f44288f 100644 --- a/arch/x86/coco/tdx/tdx.c +++ b/arch/x86/coco/tdx/tdx.c @@ -14,20 +14,6 @@ #include #include -/* TDX module Call Leaf IDs */ -#define TDX_GET_INFO 1 -#define TDX_GET_VEINFO 3 -#define TDX_GET_REPORT 4 -#define TDX_ACCEPT_PAGE 6 -#define TDX_WR 8 - -/* TDCS fields. To be used by TDG.VM.WR and TDG.VM.RD module calls */ -#define TDCS_NOTIFY_ENABLES 0x9100000000000010 - -/* TDX hypercall Leaf IDs */ -#define TDVMCALL_MAP_GPA 0x10001 -#define TDVMCALL_REPORT_FATAL_ERROR 0x10003 - /* MMIO direction */ #define EPT_READ 0 #define EPT_WRITE 1 @@ -51,24 +37,6 @@ #define TDREPORT_SUBTYPE_0 0 -/* - * Wrapper for standard use of __tdx_hypercall with no output aside from - * return code. - */ -static inline u64 _tdx_hypercall(u64 fn, u64 r12, u64 r13, u64 r14, u64 r15) -{ - struct tdx_hypercall_args args = { - .r10 = TDX_HYPERCALL_STANDARD, - .r11 = fn, - .r12 = r12, - .r13 = r13, - .r14 = r14, - .r15 = r15, - }; - - return __tdx_hypercall(&args, 0); -} - /* Called from __tdx_hypercall() for unrecoverable failure */ noinstr void __tdx_hypercall_failed(void) { diff --git a/arch/x86/include/asm/shared/tdx.h b/arch/x86/include/asm/shared/tdx.h index 4a03993e0785..562b3f4cbde8 100644 --- a/arch/x86/include/asm/shared/tdx.h +++ b/arch/x86/include/asm/shared/tdx.h @@ -12,6 +12,20 @@ #define TDX_CPUID_LEAF_ID 0x21 #define TDX_IDENT "IntelTDX " +/* TDX module Call Leaf IDs */ +#define TDX_GET_INFO 1 +#define TDX_GET_VEINFO 3 +#define TDX_GET_REPORT 4 +#define TDX_ACCEPT_PAGE 6 +#define TDX_WR 8 + +/* TDCS fields. To be used by TDG.VM.WR and TDG.VM.RD module calls */ +#define TDCS_NOTIFY_ENABLES 0x9100000000000010 + +/* TDX hypercall Leaf IDs */ +#define TDVMCALL_MAP_GPA 0x10001 +#define TDVMCALL_REPORT_FATAL_ERROR 0x10003 + #ifndef __ASSEMBLY__ /* @@ -38,8 +52,45 @@ struct tdx_hypercall_args { /* Used to request services from the VMM */ u64 __tdx_hypercall(struct tdx_hypercall_args *args, unsigned long flags); +/* + * Wrapper for standard use of __tdx_hypercall with no output aside from + * return code. 
+ */ +static inline u64 _tdx_hypercall(u64 fn, u64 r12, u64 r13, u64 r14, u64 r15) +{ + struct tdx_hypercall_args args = { + .r10 = TDX_HYPERCALL_STANDARD, + .r11 = fn, + .r12 = r12, + .r13 = r13, + .r14 = r14, + .r15 = r15, + }; + + return __tdx_hypercall(&args, 0); +} + + /* Called from __tdx_hypercall() for unrecoverable failure */ void __tdx_hypercall_failed(void); +/* + * Used in __tdx_module_call() to gather the output registers' values of the + * TDCALL instruction when requesting services from the TDX module. This is a + * software only structure and not part of the TDX module/VMM ABI + */ +struct tdx_module_output { + u64 rcx; + u64 rdx; + u64 r8; + u64 r9; + u64 r10; + u64 r11; +}; + +/* Used to communicate with the TDX module */ +u64 __tdx_module_call(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9, + struct tdx_module_output *out); + #endif /* !__ASSEMBLY__ */ #endif /* _ASM_X86_SHARED_TDX_H */ diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h index 28d889c9aa16..234197ec17e4 100644 --- a/arch/x86/include/asm/tdx.h +++ b/arch/x86/include/asm/tdx.h @@ -20,21 +20,6 @@ #ifndef __ASSEMBLY__ -/* - * Used to gather the output registers values of the TDCALL and SEAMCALL - * instructions when requesting services from the TDX module. - * - * This is a software only structure and not part of the TDX module/VMM ABI. - */ -struct tdx_module_output { - u64 rcx; - u64 rdx; - u64 r8; - u64 r9; - u64 r10; - u64 r11; -}; - /* * Used by the #VE exception handler to gather the #VE exception * info from the TDX module. This is a software only structure @@ -55,10 +40,6 @@ struct ve_info { void __init tdx_early_init(void); -/* Used to communicate with the TDX module */ -u64 __tdx_module_call(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9, - struct tdx_module_output *out); - void tdx_get_ve_info(struct ve_info *ve); bool tdx_handle_virt_exception(struct pt_regs *regs, struct ve_info *ve); From patchwork Thu Mar 30 11:49:55 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A. 
Shutemov" X-Patchwork-Id: 77149 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp1066812vqo; Thu, 30 Mar 2023 04:56:35 -0700 (PDT) X-Google-Smtp-Source: AKy350agHpZ5bQqtGASvoViC8YWGnXd7H49ngSLVj/LHSFTkV9rSk0gzQOy+yv+EcgrxTQM+x7IG X-Received: by 2002:a05:6a20:a89a:b0:d5:58df:fb7a with SMTP id ca26-20020a056a20a89a00b000d558dffb7amr6406888pzb.3.1680177395355; Thu, 30 Mar 2023 04:56:35 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1680177395; cv=none; d=google.com; s=arc-20160816; b=MiOp/1RKXyrIe89+GcBuPtydQTaUZmavUIw1ROxwZnkChjcWCN5W7cowliJZBCZPLU 4fx/7IfPuyghT65Tw2AAe6BJAypZ916fjpzOeDEHcdcgyExhs5X0k5fzd747UlKgkl+x LYlXmO5jK0wziDrbXXaoXuDgiN9rkjqyk9oshsii3NZAqIqzkfRF/bJt+l54mhHZHav3 j1JK7ecURvH/KSdm6sQWOGzKk9+s5E9HdtBHAalk1nOCDHduuSTQUkoYD+ZpKwDLISE4 9nPZ/YhUuq5grFgMxHK8nXi9bN1KLo7Q5CoRIou7sHqEn1NJcnoS3sFW5dJ7cbjrwpOr V9Lg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=4u4GBPVn1QeuU6neCfosEZSZ/2o4Hx9YEYk8G/HGEwU=; b=f0aw7ASUkMa9kwEsefnwTbrIoZe5hp266heLIIKdJOetT7Ef4/qC3cGZ6g/Q1f+5yg a5Akb2mYi20fwrG62TSp7yv4UH53siJ/07fKv4i4HcF6X0HqK76j/u38nmHjTMM1IH4P RDd8TNPwwhYf9c0Kq7jPHZBQfHSvuEAuyj+jIz7jCwBG/9K366uHdNb6BZ1KrQimtDaz D0NP1M2hj2r0c+UWcngTcM9pqab41AcEagtWMxNGr484nEZvkVE7FCwt76ZxOnko06do 9kImd1w5rGkdOgDrJXaY2JV2jsy0i+V6r9KOfYVoW3JobNd9W00qcYZvJztNHJCh5USj i/rg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b="gixM1Vv/"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
From: "Kirill A. Shutemov" Cc: "Kirill A.
Shutemov" , Dave Hansen Subject: [PATCHv9 13/14] x86/tdx: Refactor try_accept_one() Date: Thu, 30 Mar 2023 14:49:55 +0300 Message-Id: <20230330114956.20342-14-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230330114956.20342-1-kirill.shutemov@linux.intel.com> References: <20230330114956.20342-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 X-Spam-Status: No, score=-2.4 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761793692883889523?= X-GMAIL-MSGID: =?utf-8?q?1761793692883889523?= Rework try_accept_one() to return accepted size instead of modifying 'start' inside the helper. It makes 'start' in-only argument and streamlines code on the caller side. Signed-off-by: Kirill A. Shutemov Suggested-by: Borislav Petkov Reviewed-by: Dave Hansen --- arch/x86/coco/tdx/tdx.c | 38 +++++++++++++++++++------------------- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c index a9893f44288f..9e6557d7514c 100644 --- a/arch/x86/coco/tdx/tdx.c +++ b/arch/x86/coco/tdx/tdx.c @@ -713,18 +713,18 @@ static bool tdx_cache_flush_required(void) return true; } -static bool try_accept_one(phys_addr_t *start, unsigned long len, - enum pg_level pg_level) +static unsigned long try_accept_one(phys_addr_t start, unsigned long len, + enum pg_level pg_level) { unsigned long accept_size = page_level_size(pg_level); u64 tdcall_rcx; u8 page_size; - if (!IS_ALIGNED(*start, accept_size)) - return false; + if (!IS_ALIGNED(start, accept_size)) + return 0; if (len < accept_size) - return false; + return 0; /* * Pass the page physical address to the TDX module to accept the @@ -743,15 +743,14 @@ static bool try_accept_one(phys_addr_t *start, unsigned long len, page_size = 2; break; default: - return false; + return 0; } - tdcall_rcx = *start | page_size; + tdcall_rcx = start | page_size; if (__tdx_module_call(TDX_ACCEPT_PAGE, tdcall_rcx, 0, 0, 0, NULL)) - return false; + return 0; - *start += accept_size; - return true; + return accept_size; } /* @@ -788,21 +787,22 @@ static bool tdx_enc_status_changed(unsigned long vaddr, int numpages, bool enc) */ while (start < end) { unsigned long len = end - start; + unsigned long accept_size; /* * Try larger accepts first. It gives chance to VMM to keep - * 1G/2M SEPT entries where possible and speeds up process by - * cutting number of hypercalls (if successful). + * 1G/2M Secure EPT entries where possible and speeds up + * process by cutting number of hypercalls (if successful). */ - if (try_accept_one(&start, len, PG_LEVEL_1G)) - continue; - - if (try_accept_one(&start, len, PG_LEVEL_2M)) - continue; - - if (!try_accept_one(&start, len, PG_LEVEL_4K)) + accept_size = try_accept_one(start, len, PG_LEVEL_1G); + if (!accept_size) + accept_size = try_accept_one(start, len, PG_LEVEL_2M); + if (!accept_size) + accept_size = try_accept_one(start, len, PG_LEVEL_4K); + if (!accept_size) return false; + start += accept_size; } return true; From patchwork Thu Mar 30 11:49:56 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A. 
Shutemov" X-Patchwork-Id: 77158 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp1069631vqo; Thu, 30 Mar 2023 05:01:33 -0700 (PDT) X-Google-Smtp-Source: AKy350aFSFQ8quSx2ayk26E2JUaLdhykqCSfvjsw6MvFKpKSkeikFxHtbqpDrShrs1B/Qif3ZDqt X-Received: by 2002:a17:903:22d2:b0:19a:7217:32a9 with SMTP id y18-20020a17090322d200b0019a721732a9mr6651544plg.26.1680177692725; Thu, 30 Mar 2023 05:01:32 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1680177692; cv=none; d=google.com; s=arc-20160816; b=iNs04LE5YcnZ2lSmg35N/0nBYjZSEiwLOhrDYVS4E3v40pPC5dwq6jAtK4+UemD3du tngyeHqxxD0yxSY1aTY0iKXwHSbHAnfp80KWBEQ7+md9Fk+GdlQ7RdwtI0N5sPUqcnHW hjlPZYC0XN32XAsjLnWNCZHEI5s6Np68vBuRF7VurMnPuR97SfvVkCuRKo9M4qpNchX2 6CV+WYj87e5bjYHVuffKcF4VAnnGYHt8Dbrsq+XkPSQ5pgrn7CCMSOzR9WpOYiE2TBOl vy5f9FbQtKh1pKtqT7wNI4Dhu2QVxIoJUFg5x90iFSpqyHWB8kEMhnUp89gybEDmXnJ9 2FZw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=MF0qRcUciRiNtFXFbwuMYnO6py7KMxwUzVqLVroA+VI=; b=JDYeLeyiCqOE6rIQikUXje6kP+GIuaKxvqFgaPt8YBfVBKjNicGznecV/TNIQLN21v cpjERRIyOGo7aiSg/CVKbDF45uDhxJo/WWmdRgoWC6mJTT+u4greWeD2iYq1aYWKQZEM pBpA5599Oi3vqAHjMaX1P8DyBREmJipEYZodjWA3r1dKlO2GGqWy278Z2lPm+D6cpv+B hY+YivKVacdNI3DTOUwiUonEsVKXNiIEWKO6tmc9alRKJY0/GDkBCy8I69U66liESoMF hPm9YNGyuiNCbYPNYwxIhPlItA1JX9Iy/vKmrGV6if2ALI8WT2fSKHNReF1hHNtBBj5E KIrg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=h00903An; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
From: "Kirill A.
Shutemov" Subject: [PATCHv9 14/14] x86/tdx: Add unaccepted memory support Date: Thu, 30 Mar 2023 14:49:56 +0300 Message-Id: <20230330114956.20342-15-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230330114956.20342-1-kirill.shutemov@linux.intel.com> References: <20230330114956.20342-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 X-Spam-Status: No, score=-2.4 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,RCVD_IN_MSPIKE_H3, RCVD_IN_MSPIKE_WL,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761794004293007883?= X-GMAIL-MSGID: =?utf-8?q?1761794004293007883?= Hookup TDX-specific code to accept memory. Accepting the memory is the same process as converting memory from shared to private: kernel notifies VMM with MAP_GPA hypercall and then accept pages with ACCEPT_PAGE module call. The implementation in core kernel uses tdx_enc_status_changed(). It already used for converting memory to shared and back for I/O transactions. Boot stub provides own implementation of tdx_accept_memory(). It is similar in structure to tdx_enc_status_changed(), but only cares about converting memory to private. Signed-off-by: Kirill A. Shutemov --- arch/x86/Kconfig | 2 + arch/x86/boot/compressed/Makefile | 2 +- arch/x86/boot/compressed/error.c | 19 ++++++ arch/x86/boot/compressed/error.h | 1 + arch/x86/boot/compressed/mem.c | 33 +++++++++- arch/x86/boot/compressed/tdx-shared.c | 2 + arch/x86/boot/compressed/tdx.c | 39 +++++++++++ arch/x86/coco/tdx/Makefile | 2 +- arch/x86/coco/tdx/tdx-shared.c | 95 +++++++++++++++++++++++++++ arch/x86/coco/tdx/tdx.c | 86 +----------------------- arch/x86/include/asm/shared/tdx.h | 2 + arch/x86/include/asm/tdx.h | 2 + arch/x86/mm/unaccepted_memory.c | 9 ++- 13 files changed, 206 insertions(+), 88 deletions(-) create mode 100644 arch/x86/boot/compressed/tdx-shared.c create mode 100644 arch/x86/coco/tdx/tdx-shared.c diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index df21fba77db1..448cd869f0bd 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -884,9 +884,11 @@ config INTEL_TDX_GUEST bool "Intel TDX (Trust Domain Extensions) - Guest Support" depends on X86_64 && CPU_SUP_INTEL depends on X86_X2APIC + depends on EFI_STUB select ARCH_HAS_CC_PLATFORM select X86_MEM_ENCRYPT select X86_MCE + select UNACCEPTED_MEMORY help Support running as a guest under Intel TDX. Without this support, the guest kernel can not boot or run under TDX. 
diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile index 74f7adee46ad..71d9f71c13eb 100644 --- a/arch/x86/boot/compressed/Makefile +++ b/arch/x86/boot/compressed/Makefile @@ -106,7 +106,7 @@ ifdef CONFIG_X86_64 endif vmlinux-objs-$(CONFIG_ACPI) += $(obj)/acpi.o -vmlinux-objs-$(CONFIG_INTEL_TDX_GUEST) += $(obj)/tdx.o $(obj)/tdcall.o +vmlinux-objs-$(CONFIG_INTEL_TDX_GUEST) += $(obj)/tdx.o $(obj)/tdx-shared.o $(obj)/tdcall.o vmlinux-objs-$(CONFIG_UNACCEPTED_MEMORY) += $(obj)/bitmap.o $(obj)/find.o $(obj)/mem.o vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o diff --git a/arch/x86/boot/compressed/error.c b/arch/x86/boot/compressed/error.c index c881878e56d3..5313c5cb2b80 100644 --- a/arch/x86/boot/compressed/error.c +++ b/arch/x86/boot/compressed/error.c @@ -22,3 +22,22 @@ void error(char *m) while (1) asm("hlt"); } + +/* EFI libstub provides vsnprintf() */ +#ifdef CONFIG_EFI_STUB +void panic(const char *fmt, ...) +{ + static char buf[1024]; + va_list args; + int len; + + va_start(args, fmt); + len = vsnprintf(buf, sizeof(buf), fmt, args); + va_end(args); + + if (len && buf[len - 1] == '\n') + buf[len - 1] = '\0'; + + error(buf); +} +#endif diff --git a/arch/x86/boot/compressed/error.h b/arch/x86/boot/compressed/error.h index 1de5821184f1..86fe33b93715 100644 --- a/arch/x86/boot/compressed/error.h +++ b/arch/x86/boot/compressed/error.h @@ -6,5 +6,6 @@ void warn(char *m); void error(char *m) __noreturn; +void panic(const char *fmt, ...) __noreturn __cold; #endif /* BOOT_COMPRESSED_ERROR_H */ diff --git a/arch/x86/boot/compressed/mem.c b/arch/x86/boot/compressed/mem.c index de858a5180b6..e6b92e822ddd 100644 --- a/arch/x86/boot/compressed/mem.c +++ b/arch/x86/boot/compressed/mem.c @@ -5,6 +5,8 @@ #include "error.h" #include "find.h" #include "math.h" +#include "tdx.h" +#include #define PMD_SHIFT 21 #define PMD_SIZE (_AC(1, UL) << PMD_SHIFT) @@ -12,10 +14,39 @@ extern struct boot_params *boot_params; +/* + * accept_memory() and process_unaccepted_memory() called from EFI stub which + * runs before decompresser and its early_tdx_detect(). + * + * Enumerate TDX directly from the early users. 
+ */ +static bool early_is_tdx_guest(void) +{ + static bool once; + static bool is_tdx; + + if (!IS_ENABLED(CONFIG_INTEL_TDX_GUEST)) + return false; + + if (!once) { + u32 eax, sig[3]; + + cpuid_count(TDX_CPUID_LEAF_ID, 0, &eax, + &sig[0], &sig[2], &sig[1]); + is_tdx = !memcmp(TDX_IDENT, sig, sizeof(sig)); + once = true; + } + + return is_tdx; +} + static inline void __accept_memory(phys_addr_t start, phys_addr_t end) { /* Platform-specific memory-acceptance call goes here */ - error("Cannot accept memory"); + if (early_is_tdx_guest()) + tdx_accept_memory(start, end); + else + error("Cannot accept memory: unknown platform\n"); } /* diff --git a/arch/x86/boot/compressed/tdx-shared.c b/arch/x86/boot/compressed/tdx-shared.c new file mode 100644 index 000000000000..5ac43762fe13 --- /dev/null +++ b/arch/x86/boot/compressed/tdx-shared.c @@ -0,0 +1,2 @@ +#include "error.h" +#include "../../coco/tdx/tdx-shared.c" diff --git a/arch/x86/boot/compressed/tdx.c b/arch/x86/boot/compressed/tdx.c index 918a7606f53c..de1d4a87418d 100644 --- a/arch/x86/boot/compressed/tdx.c +++ b/arch/x86/boot/compressed/tdx.c @@ -3,12 +3,17 @@ #include "../cpuflags.h" #include "../string.h" #include "../io.h" +#include "align.h" #include "error.h" +#include "pgtable_types.h" #include #include #include +#include + +static u64 cc_mask; /* Called from __tdx_hypercall() for unrecoverable failure */ void __tdx_hypercall_failed(void) @@ -16,6 +21,38 @@ void __tdx_hypercall_failed(void) error("TDVMCALL failed. TDX module bug?"); } +static u64 get_cc_mask(void) +{ + struct tdx_module_output out; + unsigned int gpa_width; + + /* + * TDINFO TDX module call is used to get the TD execution environment + * information like GPA width, number of available vcpus, debug mode + * information, etc. More details about the ABI can be found in TDX + * Guest-Host-Communication Interface (GHCI), section 2.4.2 TDCALL + * [TDG.VP.INFO]. + * + * The GPA width that comes out of this call is critical. TDX guests + * can not meaningfully run without it. + */ + if (__tdx_module_call(TDX_GET_INFO, 0, 0, 0, 0, &out)) + error("TDCALL GET_INFO failed (Buggy TDX module!)\n"); + + gpa_width = out.rcx & GENMASK(5, 0); + + /* + * The highest bit of a guest physical address is the "sharing" bit. + * Set it for shared pages and clear it for private pages. 
+ */ + return BIT_ULL(gpa_width - 1); +} + +u64 cc_mkdec(u64 val) +{ + return val & ~cc_mask; +} + static inline unsigned int tdx_io_in(int size, u16 port) { struct tdx_hypercall_args args = { @@ -70,6 +107,8 @@ void early_tdx_detect(void) if (memcmp(TDX_IDENT, sig, sizeof(sig))) return; + cc_mask = get_cc_mask(); + /* Use hypercalls instead of I/O instructions */ pio_ops.f_inb = tdx_inb; pio_ops.f_outb = tdx_outb; diff --git a/arch/x86/coco/tdx/Makefile b/arch/x86/coco/tdx/Makefile index 46c55998557d..2c7dcbf1458b 100644 --- a/arch/x86/coco/tdx/Makefile +++ b/arch/x86/coco/tdx/Makefile @@ -1,3 +1,3 @@ # SPDX-License-Identifier: GPL-2.0 -obj-y += tdx.o tdcall.o +obj-y += tdx.o tdx-shared.o tdcall.o diff --git a/arch/x86/coco/tdx/tdx-shared.c b/arch/x86/coco/tdx/tdx-shared.c new file mode 100644 index 000000000000..ee74f7bbe806 --- /dev/null +++ b/arch/x86/coco/tdx/tdx-shared.c @@ -0,0 +1,95 @@ +#include +#include + +static unsigned long try_accept_one(phys_addr_t start, unsigned long len, + enum pg_level pg_level) +{ + unsigned long accept_size = page_level_size(pg_level); + u64 tdcall_rcx; + u8 page_size; + + if (!IS_ALIGNED(start, accept_size)) + return 0; + + if (len < accept_size) + return 0; + + /* + * Pass the page physical address to the TDX module to accept the + * pending, private page. + * + * Bits 2:0 of RCX encode page size: 0 - 4K, 1 - 2M, 2 - 1G. + */ + switch (pg_level) { + case PG_LEVEL_4K: + page_size = 0; + break; + case PG_LEVEL_2M: + page_size = 1; + break; + case PG_LEVEL_1G: + page_size = 2; + break; + default: + return 0; + } + + tdcall_rcx = start | page_size; + if (__tdx_module_call(TDX_ACCEPT_PAGE, tdcall_rcx, 0, 0, 0, NULL)) + return 0; + + return accept_size; +} + +bool tdx_enc_status_changed_phys(phys_addr_t start, phys_addr_t end, bool enc) +{ + if (!enc) { + /* Set the shared (decrypted) bits: */ + start |= cc_mkdec(0); + end |= cc_mkdec(0); + } + + /* + * Notify the VMM about page mapping conversion. More info about ABI + * can be found in TDX Guest-Host-Communication Interface (GHCI), + * section "TDG.VP.VMCALL" + */ + if (_tdx_hypercall(TDVMCALL_MAP_GPA, start, end - start, 0, 0)) + return false; + + /* private->shared conversion requires only MapGPA call */ + if (!enc) + return true; + + /* + * For shared->private conversion, accept the page using + * TDX_ACCEPT_PAGE TDX module call. + */ + while (start < end) { + unsigned long len = end - start; + unsigned long accept_size; + + /* + * Try larger accepts first. It gives chance to VMM to keep + * 1G/2M Secure EPT entries where possible and speeds up + * process by cutting number of hypercalls (if successful). 
+ */ + + accept_size = try_accept_one(start, len, PG_LEVEL_1G); + if (!accept_size) + accept_size = try_accept_one(start, len, PG_LEVEL_2M); + if (!accept_size) + accept_size = try_accept_one(start, len, PG_LEVEL_4K); + if (!accept_size) + return false; + start += accept_size; + } + + return true; +} + +void tdx_accept_memory(phys_addr_t start, phys_addr_t end) +{ + if (!tdx_enc_status_changed_phys(start, end, true)) + panic("Accepting memory failed: %#llx-%#llx\n", start, end); +} diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c index 9e6557d7514c..1392ebc3b406 100644 --- a/arch/x86/coco/tdx/tdx.c +++ b/arch/x86/coco/tdx/tdx.c @@ -713,46 +713,6 @@ static bool tdx_cache_flush_required(void) return true; } -static unsigned long try_accept_one(phys_addr_t start, unsigned long len, - enum pg_level pg_level) -{ - unsigned long accept_size = page_level_size(pg_level); - u64 tdcall_rcx; - u8 page_size; - - if (!IS_ALIGNED(start, accept_size)) - return 0; - - if (len < accept_size) - return 0; - - /* - * Pass the page physical address to the TDX module to accept the - * pending, private page. - * - * Bits 2:0 of RCX encode page size: 0 - 4K, 1 - 2M, 2 - 1G. - */ - switch (pg_level) { - case PG_LEVEL_4K: - page_size = 0; - break; - case PG_LEVEL_2M: - page_size = 1; - break; - case PG_LEVEL_1G: - page_size = 2; - break; - default: - return 0; - } - - tdcall_rcx = start | page_size; - if (__tdx_module_call(TDX_ACCEPT_PAGE, tdcall_rcx, 0, 0, 0, NULL)) - return 0; - - return accept_size; -} - /* * Inform the VMM of the guest's intent for this physical page: shared with * the VMM or private to the guest. The VMM is expected to change its mapping @@ -761,51 +721,9 @@ static unsigned long try_accept_one(phys_addr_t start, unsigned long len, static bool tdx_enc_status_changed(unsigned long vaddr, int numpages, bool enc) { phys_addr_t start = __pa(vaddr); - phys_addr_t end = __pa(vaddr + numpages * PAGE_SIZE); - - if (!enc) { - /* Set the shared (decrypted) bits: */ - start |= cc_mkdec(0); - end |= cc_mkdec(0); - } - - /* - * Notify the VMM about page mapping conversion. More info about ABI - * can be found in TDX Guest-Host-Communication Interface (GHCI), - * section "TDG.VP.VMCALL" - */ - if (_tdx_hypercall(TDVMCALL_MAP_GPA, start, end - start, 0, 0)) - return false; - - /* private->shared conversion requires only MapGPA call */ - if (!enc) - return true; + phys_addr_t end = __pa(vaddr + numpages * PAGE_SIZE); - /* - * For shared->private conversion, accept the page using - * TDX_ACCEPT_PAGE TDX module call. - */ - while (start < end) { - unsigned long len = end - start; - unsigned long accept_size; - - /* - * Try larger accepts first. It gives chance to VMM to keep - * 1G/2M Secure EPT entries where possible and speeds up - * process by cutting number of hypercalls (if successful). 
- */ - - accept_size = try_accept_one(start, len, PG_LEVEL_1G); - if (!accept_size) - accept_size = try_accept_one(start, len, PG_LEVEL_2M); - if (!accept_size) - accept_size = try_accept_one(start, len, PG_LEVEL_4K); - if (!accept_size) - return false; - start += accept_size; - } - - return true; + return tdx_enc_status_changed_phys(start, end, enc); } void __init tdx_early_init(void) diff --git a/arch/x86/include/asm/shared/tdx.h b/arch/x86/include/asm/shared/tdx.h index 562b3f4cbde8..3afbba545a0d 100644 --- a/arch/x86/include/asm/shared/tdx.h +++ b/arch/x86/include/asm/shared/tdx.h @@ -92,5 +92,7 @@ struct tdx_module_output { u64 __tdx_module_call(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9, struct tdx_module_output *out); +void tdx_accept_memory(phys_addr_t start, phys_addr_t end); + #endif /* !__ASSEMBLY__ */ #endif /* _ASM_X86_SHARED_TDX_H */ diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h index 234197ec17e4..3a7340ad9a4b 100644 --- a/arch/x86/include/asm/tdx.h +++ b/arch/x86/include/asm/tdx.h @@ -50,6 +50,8 @@ bool tdx_early_handle_ve(struct pt_regs *regs); int tdx_mcall_get_report0(u8 *reportdata, u8 *tdreport); +bool tdx_enc_status_changed_phys(phys_addr_t start, phys_addr_t end, bool enc); + #else static inline void tdx_early_init(void) { }; diff --git a/arch/x86/mm/unaccepted_memory.c b/arch/x86/mm/unaccepted_memory.c index a0a58486eb74..a521f8c0987d 100644 --- a/arch/x86/mm/unaccepted_memory.c +++ b/arch/x86/mm/unaccepted_memory.c @@ -6,6 +6,7 @@ #include #include +#include #include /* Protects unaccepted memory bitmap */ @@ -61,7 +62,13 @@ void accept_memory(phys_addr_t start, phys_addr_t end) unsigned long len = range_end - range_start; /* Platform-specific memory-acceptance call goes here */ - panic("Cannot accept memory: unknown platform\n"); + if (cpu_feature_enabled(X86_FEATURE_TDX_GUEST)) { + tdx_accept_memory(range_start * PMD_SIZE, + range_end * PMD_SIZE); + } else { + panic("Cannot accept memory: unknown platform\n"); + } + bitmap_clear(bitmap, range_start, len); } spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
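A final illustrative note on the hunk above: for_each_set_bitrange_from() yields bit indices, and since each bit stands for one 2M chunk, turning a bit range back into the physical range passed to tdx_accept_memory() is a multiplication by PMD_SIZE on both ends. A tiny example with made-up indices:

#include <stdint.h>
#include <stdio.h>

#define PMD_SIZE (2ULL * 1024 * 1024)

int main(void)
{
	/* Suppose the bitmap walk found bits 128..159 set (a 64M unaccepted run). */
	uint64_t range_start = 128, range_end = 160;

	printf("bits [%llu, %llu) -> tdx_accept_memory(%#llx, %#llx)\n",
	       (unsigned long long)range_start, (unsigned long long)range_end,
	       (unsigned long long)(range_start * PMD_SIZE),
	       (unsigned long long)(range_end * PMD_SIZE));
	return 0;
}

tdx_accept_memory() then carves that physical run into 1G/2M/4K accepts as shown earlier in the series, while bitmap_clear() marks the whole run as done under the same lock.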