From patchwork Wed Oct 11 08:30:11 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: tip-bot2 for Thomas Gleixner
X-Patchwork-Id: 151199
Date: Wed, 11 Oct 2023 08:30:11 -0000
From: "tip-bot2 for Alexander Shishkin"
Sender: tip-bot2@linutronix.de
Reply-to: linux-kernel@vger.kernel.org
To: linux-tip-commits@vger.kernel.org
Subject: [tip: x86/mm] x86/sev: Move sev_setup_arch() to mem_encrypt.c
Cc: Alexander Shishkin, Ingo Molnar, Tom Lendacky, x86@kernel.org,
    linux-kernel@vger.kernel.org
In-Reply-To: <20231010145220.3960055-2-alexander.shishkin@linux.intel.com>
References: <20231010145220.3960055-2-alexander.shishkin@linux.intel.com>
Message-ID: <169701301155.3135.15738501380783422700.tip-bot2@tip-bot2>

The following commit has been merged into the x86/mm branch of tip:

Commit-ID:     6e74b125155dc8c747d76fb45d8e6d20e9e4fb4d
Gitweb:        https://git.kernel.org/tip/6e74b125155dc8c747d76fb45d8e6d20e9e4fb4d
Author:        Alexander Shishkin
AuthorDate:    Tue, 10 Oct 2023 17:52:19 +03:00
Committer:     Ingo Molnar
CommitterDate: Wed, 11 Oct 2023 10:15:47 +02:00

x86/sev: Move sev_setup_arch() to mem_encrypt.c

Since commit:

  4d96f9109109b ("x86/sev: Replace occurrences of sev_active() with cc_platform_has()")

... the SWIOTLB bounce buffer size adjustment and restricted virtio memory
setting also inadvertently apply to TDX: the code is using
cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) as a gatekeeping condition, which
is also true for TDX, and this is also what we want.

To reflect this, move the corresponding code to generic mem_encrypt.c.

No functional changes intended.

Signed-off-by: Alexander Shishkin
Signed-off-by: Ingo Molnar
Reviewed-by: Tom Lendacky
Link: https://lore.kernel.org/r/20231010145220.3960055-2-alexander.shishkin@linux.intel.com
---
 arch/x86/include/asm/mem_encrypt.h |  4 +--
 arch/x86/kernel/setup.c            |  2 +-
 arch/x86/mm/mem_encrypt.c          | 34 ++++++++++++++++++++++++++++-
 arch/x86/mm/mem_encrypt_amd.c      | 35 +-----------------------------
 4 files changed, 37 insertions(+), 38 deletions(-)

diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 473b16d..359ada4 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -19,8 +19,10 @@
 #ifdef CONFIG_X86_MEM_ENCRYPT
 void __init mem_encrypt_init(void);
+void __init mem_encrypt_setup_arch(void);
 #else
 static inline void mem_encrypt_init(void) { }
+static inline void __init mem_encrypt_setup_arch(void) { }
 #endif
 
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 
@@ -43,7 +45,6 @@ void __init sme_map_bootdata(char *real_mode_data);
 void __init sme_unmap_bootdata(char *real_mode_data);
 
 void __init sme_early_init(void);
 
-void __init sev_setup_arch(void);
 void __init sme_encrypt_kernel(struct boot_params *bp);
 void __init sme_enable(struct boot_params *bp);
@@ -73,7 +74,6 @@ static inline void __init sme_map_bootdata(char *real_mode_data) { }
 static inline void __init sme_unmap_bootdata(char *real_mode_data) { }
 
 static inline void __init sme_early_init(void) { }
 
-static inline void __init sev_setup_arch(void) { }
 static inline void __init sme_encrypt_kernel(struct boot_params *bp) { }
 static inline void __init sme_enable(struct boot_params *bp) { }
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index b9145a6..ec44dc5 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -1124,7 +1124,7 @@ void __init setup_arch(char **cmdline_p)
 	 * Needs to run after memblock setup because it needs the physical
 	 * memory size.
 	 */
-	sev_setup_arch();
+	mem_encrypt_setup_arch();
 
 	efi_fake_memmap();
 	efi_find_mirror();
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 9f27e14..c290c55 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include
 
 /* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
 bool force_dma_unencrypted(struct device *dev)
@@ -86,3 +87,36 @@ void __init mem_encrypt_init(void)
 
 	print_mem_encrypt_feature_info();
 }
+
+void __init mem_encrypt_setup_arch(void)
+{
+	phys_addr_t total_mem = memblock_phys_mem_size();
+	unsigned long size;
+
+	if (!cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
+		return;
+
+	/*
+	 * For SEV and TDX, all DMA has to occur via shared/unencrypted pages.
+	 * Kernel uses SWIOTLB to make this happen without changing device
+	 * drivers. However, depending on the workload being run, the
+	 * default 64MB of SWIOTLB may not be enough and SWIOTLB may
+	 * run out of buffers for DMA, resulting in I/O errors and/or
+	 * performance degradation especially with high I/O workloads.
+	 *
+	 * Adjust the default size of SWIOTLB using a percentage of guest
+	 * memory for SWIOTLB buffers. Also, as the SWIOTLB bounce buffer
+	 * memory is allocated from low memory, ensure that the adjusted size
+	 * is within the limits of low available memory.
+	 *
+	 * The percentage of guest memory used here for SWIOTLB buffers
+	 * is more of an approximation of the static adjustment which
+	 * 64MB for <1G, and ~128M to 256M for 1G-to-4G, i.e., the 6%
+	 */
+	size = total_mem * 6 / 100;
+	size = clamp_val(size, IO_TLB_DEFAULT_SIZE, SZ_1G);
+	swiotlb_adjust_size(size);
+
+	/* Set restricted memory access for virtio. */
+	virtio_set_mem_acc_cb(virtio_require_restricted_mem_acc);
+}
diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
index 6faea41..62dde75 100644
--- a/arch/x86/mm/mem_encrypt_amd.c
+++ b/arch/x86/mm/mem_encrypt_amd.c
@@ -20,7 +20,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 
@@ -215,40 +214,6 @@ void __init sme_map_bootdata(char *real_mode_data)
 	__sme_early_map_unmap_mem(__va(cmdline_paddr), COMMAND_LINE_SIZE, true);
 }
 
-void __init sev_setup_arch(void)
-{
-	phys_addr_t total_mem = memblock_phys_mem_size();
-	unsigned long size;
-
-	if (!cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
-		return;
-
-	/*
-	 * For SEV, all DMA has to occur via shared/unencrypted pages.
-	 * SEV uses SWIOTLB to make this happen without changing device
-	 * drivers. However, depending on the workload being run, the
-	 * default 64MB of SWIOTLB may not be enough and SWIOTLB may
-	 * run out of buffers for DMA, resulting in I/O errors and/or
-	 * performance degradation especially with high I/O workloads.
-	 *
-	 * Adjust the default size of SWIOTLB for SEV guests using
-	 * a percentage of guest memory for SWIOTLB buffers.
-	 * Also, as the SWIOTLB bounce buffer memory is allocated
-	 * from low memory, ensure that the adjusted size is within
-	 * the limits of low available memory.
-	 *
-	 * The percentage of guest memory used here for SWIOTLB buffers
-	 * is more of an approximation of the static adjustment which
-	 * 64MB for <1G, and ~128M to 256M for 1G-to-4G, i.e., the 6%
-	 */
-	size = total_mem * 6 / 100;
-	size = clamp_val(size, IO_TLB_DEFAULT_SIZE, SZ_1G);
-	swiotlb_adjust_size(size);
-
-	/* Set restricted memory access for virtio. */
-	virtio_set_mem_acc_cb(virtio_require_restricted_mem_acc);
-}
-
 static unsigned long pg_level_to_pfn(int level, pte_t *kpte, pgprot_t *ret_prot)
 {
 	unsigned long pfn = 0;
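
[Editor's note: for readers skimming the diff, the sizing rule that the moved
mem_encrypt_setup_arch() applies can be tried out in isolation. Below is a
minimal stand-alone user-space sketch of that arithmetic, not kernel code:
roughly 6% of guest memory is reserved for SWIOTLB bounce buffers, clamped
between the kernel's default SWIOTLB size of 64MB (IO_TLB_DEFAULT_SIZE) and
1GB (SZ_1G). The helper name and the example guest sizes are hypothetical.]

/*
 * Stand-alone sketch (not kernel code) of the SWIOTLB sizing rule used in
 * mem_encrypt_setup_arch(): ~6% of guest memory, clamped to [64MB, 1GB].
 * The guest sizes below are made-up examples for illustration.
 */
#include <stdio.h>
#include <stdint.h>

#define MB	(1ULL << 20)
#define GB	(1ULL << 30)

static uint64_t swiotlb_size_for(uint64_t total_mem)
{
	uint64_t size = total_mem * 6 / 100;	/* 6% of guest memory */

	if (size < 64 * MB)	/* floor: default SWIOTLB size (IO_TLB_DEFAULT_SIZE) */
		size = 64 * MB;
	if (size > 1 * GB)	/* ceiling: SZ_1G */
		size = 1 * GB;
	return size;
}

int main(void)
{
	const uint64_t guests[] = { 1 * GB, 4 * GB, 16 * GB, 64 * GB };
	size_t i;

	for (i = 0; i < sizeof(guests) / sizeof(guests[0]); i++)
		printf("%6llu MB guest -> %4llu MB SWIOTLB\n",
		       (unsigned long long)(guests[i] / MB),
		       (unsigned long long)(swiotlb_size_for(guests[i]) / MB));
	return 0;
}

On these inputs the rule yields 64MB, 245MB, 983MB and 1024MB respectively:
small guests keep the 64MB default, while very large guests are capped at 1GB.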