Message ID | 20231005131402.14611-11-kirill.shutemov@linux.intel.com |
---|---|
State | New |
Headers |
Return-Path: <linux-kernel-owner@vger.kernel.org> From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, x86@kernel.org Cc: "Rafael J. Wysocki" <rafael@kernel.org>, Peter Zijlstra <peterz@infradead.org>, Adrian Hunter <adrian.hunter@intel.com>, Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>, Elena Reshetova <elena.reshetova@intel.com>, Jun Nakajima <jun.nakajima@intel.com>, Rick Edgecombe <rick.p.edgecombe@intel.com>, Tom Lendacky <thomas.lendacky@amd.com>, kexec@lists.infradead.org, linux-coco@lists.linux.dev, linux-kernel@vger.kernel.org, "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Subject: [PATCH 10/13] x86/tdx: Convert shared memory back to private on kexec Date: Thu, 5 Oct 2023 16:13:59 +0300 Message-ID: <20231005131402.14611-11-kirill.shutemov@linux.intel.com> In-Reply-To: <20231005131402.14611-1-kirill.shutemov@linux.intel.com> References: <20231005131402.14611-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 |
Series | x86/tdx: Add kexec support |
Commit Message
Kirill A. Shutemov
Oct. 5, 2023, 1:13 p.m. UTC
TDX guests allocate shared buffers to perform I/O. This is done by
allocating pages normally from the buddy allocator and converting them
to shared with set_memory_decrypted().
The target kernel has no idea which memory was converted this way. It only
sees E820_TYPE_RAM.
Accessing shared memory via a private mapping is fatal: it leads to an
unrecoverable TD exit.
On TD shutdown (which also covers kexec), walk the direct mapping and
convert all shared memory back to private. This makes all RAM private
again, and the target kernel may use it normally.
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
arch/x86/Kconfig | 1 +
arch/x86/coco/tdx/kexec.c | 0
arch/x86/coco/tdx/tdx.c | 137 +++++++++++++++++++++++++++++++++++++-
3 files changed, 136 insertions(+), 2 deletions(-)
create mode 100644 arch/x86/coco/tdx/kexec.c
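The idea of the patch — walk the direct mapping before handing control to the next kernel and flip every range that is still shared back to private — can be sketched as a simplified userspace simulation. All names here are hypothetical stand-ins (this is not the kernel implementation; the real code walks page-table entries via lookup_address() and calls tdx_enc_status_changed()):

```c
#include <assert.h>
#include <stddef.h>

#define NPAGES 16

enum page_state { PAGE_PRIVATE, PAGE_SHARED };

/* Toy "direct mapping": one state per page. */
static enum page_state mapping[NPAGES];

/* Stand-in for converting a page back to private (enc=true path). */
static void convert_to_private(size_t page)
{
	mapping[page] = PAGE_PRIVATE;
}

/*
 * Walk the mapping and convert every shared page back to private,
 * returning how many shared pages were found (mirrors `found` in the
 * patch, which is cross-checked against the nr_shared counter).
 */
static long unshare_all_memory_sim(void)
{
	long found = 0;

	for (size_t p = 0; p < NPAGES; p++) {
		if (mapping[p] == PAGE_SHARED) {
			convert_to_private(p);
			found++;
		}
	}
	return found;
}

static long run_demo(void)
{
	/* A few pages were shared for I/O, as set_memory_decrypted() would do. */
	mapping[3] = mapping[4] = mapping[9] = PAGE_SHARED;

	long found = unshare_all_memory_sim();

	/* After the walk, all "RAM" is private again. */
	for (size_t p = 0; p < NPAGES; p++)
		assert(mapping[p] == PAGE_PRIVATE);
	return found;
}
```

The accounting cross-check in the real patch (comparing `found` against `nr_shared`) exists only to flag bugs in shared-page bookkeeping; it does not affect the conversion itself.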
Comments
Hello Kirill,

On 10/5/2023 8:13 AM, Kirill A. Shutemov wrote:
> TDX guests allocate shared buffers to perform I/O. It is done by
> allocating pages normally from the buddy allocator and converting them
> to shared with set_memory_decrypted().
>
> The target kernel has no idea what memory is converted this way. It only
> sees E820_TYPE_RAM.
>
> Accessing shared memory via private mapping is fatal. It leads to
> unrecoverable TD exit.
>
> On TD shutdown (also covers kexec), walk direct mapping and convert all
> shared memory back to private. It makes all RAM private again and target
> kernel may use it normally.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> ---
>  arch/x86/Kconfig          |   1 +
>  arch/x86/coco/tdx/kexec.c |   0
>  arch/x86/coco/tdx/tdx.c   | 137 +++++++++++++++++++++++++++++++++++++-
>  3 files changed, 136 insertions(+), 2 deletions(-)
>  create mode 100644 arch/x86/coco/tdx/kexec.c
>
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index 7368d254d01f..b5acf9fb4c70 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -884,6 +884,7 @@ config INTEL_TDX_GUEST
>  	select X86_MEM_ENCRYPT
>  	select X86_MCE
>  	select UNACCEPTED_MEMORY
> +	select EMERGENCY_VIRT_CALLBACK
>  	help
>  	  Support running as a guest under Intel TDX. Without this support,
>  	  the guest kernel can not boot or run under TDX.
> diff --git a/arch/x86/coco/tdx/kexec.c b/arch/x86/coco/tdx/kexec.c
> new file mode 100644
> index 000000000000..e69de29bb2d1
> diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
> index 56e152126f20..ac0745303983 100644
> --- a/arch/x86/coco/tdx/tdx.c
> +++ b/arch/x86/coco/tdx/tdx.c
> @@ -6,6 +6,7 @@
>
>  #include <linux/cpufeature.h>
>  #include <linux/debugfs.h>
> +#include <linux/delay.h>
>  #include <linux/export.h>
>  #include <linux/io.h>
>  #include <asm/coco.h>
> @@ -14,6 +15,8 @@
>  #include <asm/insn.h>
>  #include <asm/insn-eval.h>
>  #include <asm/pgtable.h>
> +#include <asm/reboot.h>
> +#include <asm/set_memory.h>
>
>  /* MMIO direction */
>  #define EPT_READ	0
> @@ -40,6 +43,9 @@
>
>  static atomic_long_t nr_shared;
>
> +static atomic_t conversions_in_progress;
> +static bool conversion_allowed = true;
> +
>  static inline bool pte_decrypted(pte_t pte)
>  {
>  	return cc_mkdec(pte_val(pte)) == pte_val(pte);
> @@ -704,6 +710,14 @@ static bool tdx_tlb_flush_required(bool private)
>
>  static bool tdx_cache_flush_required(void)
>  {
> +	/*
> +	 * Avoid issuing CLFLUSH on set_memory_decrypted() if conversions
> +	 * stopped. Otherwise it can race with unshare_all_memory() and trigger
> +	 * implicit conversion to shared.
> +	 */
> +	if (!conversion_allowed)
> +		return false;
> +
> 	/*
> 	 * AMD SME/SEV can avoid cache flushing if HW enforces cache coherence.
> 	 * TDX doesn't have such capability.
> @@ -787,12 +801,25 @@ static bool tdx_enc_status_changed(unsigned long vaddr, int numpages, bool enc)
>  static int tdx_enc_status_change_prepare(unsigned long vaddr, int numpages,
>  					 bool enc)
>  {
> +	atomic_inc(&conversions_in_progress);
> +
> +	/*
> +	 * Check after bumping conversions_in_progress to serialize
> +	 * against tdx_shutdown().
> +	 */
> +	if (!conversion_allowed) {
> +		atomic_dec(&conversions_in_progress);
> +		return -EBUSY;
> +	}
> +
> 	/*
> 	 * Only handle shared->private conversion here.
> 	 * See the comment in tdx_early_init().
> 	 */
> -	if (enc && !tdx_enc_status_changed(vaddr, numpages, enc))
> +	if (enc && !tdx_enc_status_changed(vaddr, numpages, enc)) {
> +		atomic_dec(&conversions_in_progress);
>  		return -EIO;
> +	}
>
>  	return 0;
>  }
> @@ -804,17 +831,115 @@ static int tdx_enc_status_change_finish(unsigned long vaddr, int numpages,
> 	 * Only handle private->shared conversion here.
> 	 * See the comment in tdx_early_init().
> 	 */
> -	if (!enc && !tdx_enc_status_changed(vaddr, numpages, enc))
> +	if (!enc && !tdx_enc_status_changed(vaddr, numpages, enc)) {
> +		atomic_dec(&conversions_in_progress);
>  		return -EIO;
> +	}
>
>  	if (enc)
>  		atomic_long_sub(numpages, &nr_shared);
>  	else
>  		atomic_long_add(numpages, &nr_shared);
>
> +	atomic_dec(&conversions_in_progress);
> +
>  	return 0;
>  }
>
> +static void unshare_all_memory(bool unmap)
> +{
> +	unsigned long addr, end;
> +	long found = 0, shared;
> +
> +	/*
> +	 * Walk direct mapping and convert all shared memory back to private,
> +	 */
> +
> +	addr = PAGE_OFFSET;
> +	end  = PAGE_OFFSET + get_max_mapped();
> +
> +	while (addr < end) {
> +		unsigned long size;
> +		unsigned int level;
> +		pte_t *pte;
> +
> +		pte = lookup_address(addr, &level);

IIRC, you were earlier walking the direct mapping using
walk_page_range_novma(), any particular reason to use lookup_address()
instead ?

> +		size = page_level_size(level);
> +
> +		if (pte && pte_decrypted(*pte)) {

Additionally need to add check for pte_none() here to handle physical
memory holes in direct mapping.

> +			int pages = size / PAGE_SIZE;
> +
> +			/*
> +			 * Touching memory with shared bit set triggers implicit
> +			 * conversion to shared.
> +			 *
> +			 * Make sure nobody touches the shared range from
> +			 * now on.
> +			 *
> +			 * Bypass unmapping for crash scenario. Unmapping
> +			 * requires sleepable context, but in crash case kernel
> +			 * hits the code path with interrupts disabled.

In case of SNP we will need to temporarily enable interrupts during this
unsharing as we invoke set_memory_encrypted() which then hits a BUG_ON()
in cpa_flush() if interrupts are disabled.

Thanks,
Ashish

> +			 * It shouldn't be a problem as all secondary CPUs are
> +			 * down and kernel runs with interrupts disabled, so
> +			 * there is no room for race.
> +			 */
> +			if (unmap)
> +				set_memory_np(addr, pages);
> +
> +			if (!tdx_enc_status_changed(addr, pages, true)) {
> +				pr_err("Failed to unshare range %#lx-%#lx\n",
> +				       addr, addr + size);
> +			}
> +
> +			found += pages;
> +		}
> +
> +		addr += size;
> +	}
> +
> +	shared = atomic_long_read(&nr_shared);
> +	if (shared != found) {
> +		pr_err("shared page accounting is off\n");
> +		pr_err("nr_shared = %ld, nr_found = %ld\n", shared, found);
> +	}
> +}
> +
> +static void tdx_shutdown(void)
> +{
> +	unsigned long timeout;
> +
> +	/*
> +	 * Stop new private<->shared conversions and wait for in-flight
> +	 * conversions to complete.
> +	 *
> +	 * Do not wait more than 30 seconds.
> +	 */
> +	timeout = 30 * USEC_PER_SEC;
> +	conversion_allowed = false;
> +	while (atomic_read(&conversions_in_progress) && timeout--)
> +		udelay(1);
> +
> +	if (!timeout)
> +		pr_warn("Failed to finish shared<->private conversions\n");
> +
> +	unshare_all_memory(true);
> +
> +	native_machine_shutdown();
> +}
> +
> +static void tdx_crash_shutdown(void)
> +{
> +	/*
> +	 * Crash can race with private<->shared conversion.
> +	 *
> +	 * There's no clean way out: report and proceed.
> +	 */
> +	if (atomic_read(&conversions_in_progress))
> +		pr_warn("Failed to finish shared<->private conversions\n");
> +
> +	unshare_all_memory(false);
> +}
> +
>  void __init tdx_early_init(void)
>  {
>  	struct tdx_module_args args = {
> @@ -882,6 +1007,14 @@ void __init tdx_early_init(void)
> 	 */
> 	x86_cpuinit.parallel_bringup = false;
>
> +	machine_ops.shutdown = tdx_shutdown;
> +
> +	/*
> +	 * KVM overrides machine_ops.crash_shutdown, use emergency
> +	 * virt callback instead.
> +	 */
> +	cpu_emergency_register_virt_callback(tdx_crash_shutdown);
> +
>  	pr_info("Guest detected\n");
>  }
>
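The serialization scheme the patch introduces — a `conversions_in_progress` counter bracketing each conversion, plus a `conversion_allowed` flag that is re-checked only after the counter is bumped, so the shutdown path can observe either the flag cleared or the counter non-zero — can be sketched in userspace with C11 atomics. The names mirror the patch, but this is a simplified single-threaded model of the protocol, not the kernel code:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int conversions_in_progress;
static atomic_bool conversion_allowed = true;

/*
 * Mirrors tdx_enc_status_change_prepare(): bump the counter first, then
 * re-check the flag. Shutdown clears the flag before reading the counter,
 * so it can never miss a conversion that was allowed to proceed.
 */
static int conversion_prepare(void)
{
	atomic_fetch_add(&conversions_in_progress, 1);
	if (!atomic_load(&conversion_allowed)) {
		atomic_fetch_sub(&conversions_in_progress, 1);
		return -1; /* -EBUSY in the patch */
	}
	return 0;
}

/* Mirrors the decrement at the end of tdx_enc_status_change_finish(). */
static void conversion_finish(void)
{
	atomic_fetch_sub(&conversions_in_progress, 1);
}

/*
 * Mirrors the start of tdx_shutdown(): forbid new conversions, then wait
 * (here: just observe once) until in-flight ones drain.
 */
static bool shutdown_can_proceed(void)
{
	atomic_store(&conversion_allowed, false);
	return atomic_load(&conversions_in_progress) == 0;
}

static int run_demo(void)
{
	assert(conversion_prepare() == 0);   /* conversion in flight */
	assert(!shutdown_can_proceed());     /* shutdown must keep waiting */
	conversion_finish();
	assert(shutdown_can_proceed());      /* drained: safe to unshare */
	assert(conversion_prepare() == -1);  /* new conversions now refused */
	return 0;
}
```

The kernel version additionally bounds the wait (30 seconds of udelay(1) polling) because a stuck conversion must not prevent kexec forever.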
On Thu, Oct 05, 2023 at 01:41:38PM -0500, Kalra, Ashish wrote:
> > +static void unshare_all_memory(bool unmap)
> > +{
> > +	unsigned long addr, end;
> > +	long found = 0, shared;
> > +
> > +	/*
> > +	 * Walk direct mapping and convert all shared memory back to private,
> > +	 */
> > +
> > +	addr = PAGE_OFFSET;
> > +	end  = PAGE_OFFSET + get_max_mapped();
> > +
> > +	while (addr < end) {
> > +		unsigned long size;
> > +		unsigned int level;
> > +		pte_t *pte;
> > +
> > +		pte = lookup_address(addr, &level);
>
> IIRC, you were earlier walking the direct mapping using
> walk_page_range_novma(), any particular reason to use lookup_address()
> instead ?

walk_page_range_novma() wants mmap lock to be taken, but it is tricky as
we run here from atomic context in case of crash.

I considered using trylock to bypass the limitation, but it is a hack.

> > +		size = page_level_size(level);
> > +
> > +		if (pte && pte_decrypted(*pte)) {
>
> Additionally need to add check for pte_none() here to handle physical memory
> holes in direct mapping.

lookup_address() returns NULL for none entries.

> > +			int pages = size / PAGE_SIZE;
> > +
> > +			/*
> > +			 * Touching memory with shared bit set triggers implicit
> > +			 * conversion to shared.
> > +			 *
> > +			 * Make sure nobody touches the shared range from
> > +			 * now on.
> > +			 *
> > +			 * Bypass unmapping for crash scenario. Unmapping
> > +			 * requires sleepable context, but in crash case kernel
> > +			 * hits the code path with interrupts disabled.
>
> In case of SNP we will need to temporarily enable interrupts during this
> unsharing as we invoke set_memory_encrypted() which then hits a BUG_ON() in
> cpa_flush() if interrupts are disabled.

Do you really need full set_memory_encrypted()? Can't you do something
lighter?
On 10/5/2023 4:28 PM, Kirill A. Shutemov wrote:
> On Thu, Oct 05, 2023 at 01:41:38PM -0500, Kalra, Ashish wrote:
>>> +static void unshare_all_memory(bool unmap)
>>> +{
>>> +	unsigned long addr, end;
>>> +	long found = 0, shared;
>>> +
>>> +	/*
>>> +	 * Walk direct mapping and convert all shared memory back to private,
>>> +	 */
>>> +
>>> +	addr = PAGE_OFFSET;
>>> +	end  = PAGE_OFFSET + get_max_mapped();
>>> +
>>> +	while (addr < end) {
>>> +		unsigned long size;
>>> +		unsigned int level;
>>> +		pte_t *pte;
>>> +
>>> +		pte = lookup_address(addr, &level);
>>
>> IIRC, you were earlier walking the direct mapping using
>> walk_page_range_novma(), any particular reason to use lookup_address()
>> instead ?
>
> walk_page_range_novma() wants mmap lock to be taken, but it is tricky as
> we run here from atomic context in case of crash.
>
> I considered using trylock to bypass the limitation, but it is a hack.
>
>>> +		size = page_level_size(level);
>>> +
>>> +		if (pte && pte_decrypted(*pte)) {
>>
>> Additionally need to add check for pte_none() here to handle physical memory
>> holes in direct mapping.
>
> lookup_address() returns NULL for none entries.
>

Looking at lookup_address_in_pgd(), at pte level it is simply returning
pte_offset_kernel() and there does not seem to be a check for returning
NULL if pte_none() ?

>>> +			int pages = size / PAGE_SIZE;
>>> +
>>> +			/*
>>> +			 * Touching memory with shared bit set triggers implicit
>>> +			 * conversion to shared.
>>> +			 *
>>> +			 * Make sure nobody touches the shared range from
>>> +			 * now on.
>>> +			 *
>>> +			 * Bypass unmapping for crash scenario. Unmapping
>>> +			 * requires sleepable context, but in crash case kernel
>>> +			 * hits the code path with interrupts disabled.
>>
>> In case of SNP we will need to temporarily enable interrupts during this
>> unsharing as we invoke set_memory_encrypted() which then hits a BUG_ON() in
>> cpa_flush() if interrupts are disabled.
>
> Do you really need full set_memory_encrypted()? Can't you do something
> lighter?
>
We need to modify the PTE for setting c-bit to 1 so that will require
cpa_flush(), though probably can add something lighter to do
clflush_cache_range() directly ?

Thanks,
Ashish
On Thu, Oct 05, 2023 at 05:01:23PM -0500, Kalra, Ashish wrote:
> On 10/5/2023 4:28 PM, Kirill A. Shutemov wrote:
> > On Thu, Oct 05, 2023 at 01:41:38PM -0500, Kalra, Ashish wrote:
> > > > +static void unshare_all_memory(bool unmap)
> > > > +{
> > > > +	unsigned long addr, end;
> > > > +	long found = 0, shared;
> > > > +
> > > > +	/*
> > > > +	 * Walk direct mapping and convert all shared memory back to private,
> > > > +	 */
> > > > +
> > > > +	addr = PAGE_OFFSET;
> > > > +	end  = PAGE_OFFSET + get_max_mapped();
> > > > +
> > > > +	while (addr < end) {
> > > > +		unsigned long size;
> > > > +		unsigned int level;
> > > > +		pte_t *pte;
> > > > +
> > > > +		pte = lookup_address(addr, &level);
> > >
> > > IIRC, you were earlier walking the direct mapping using
> > > walk_page_range_novma(), any particular reason to use lookup_address()
> > > instead ?
> >
> > walk_page_range_novma() wants mmap lock to be taken, but it is tricky as
> > we run here from atomic context in case of crash.
> >
> > I considered using trylock to bypass the limitation, but it is a hack.
> >
> > > > +		size = page_level_size(level);
> > > > +
> > > > +		if (pte && pte_decrypted(*pte)) {
> > >
> > > Additionally need to add check for pte_none() here to handle physical memory
> > > holes in direct mapping.
> >
> > lookup_address() returns NULL for none entries.
> >
>
> Looking at lookup_address_in_pgd(), at pte level it is simply returning
> pte_offset_kernel() and there does not seem to be a check for returning NULL
> if pte_none() ?

Hm. You are right.

I think it is yet another quirk in how lookup_address() is implemented. We
need to make it straight too.

There's two options: either make lookup_address() return a pointer to the
entry even if it is none, or add a check for pte_none() after
pte_offset_kernel() and return NULL if it is true.

I like the first option more as it allows the caller to populate the entry
if it wants.

> > > > +			int pages = size / PAGE_SIZE;
> > > > +
> > > > +			/*
> > > > +			 * Touching memory with shared bit set triggers implicit
> > > > +			 * conversion to shared.
> > > > +			 *
> > > > +			 * Make sure nobody touches the shared range from
> > > > +			 * now on.
> > > > +			 *
> > > > +			 * Bypass unmapping for crash scenario. Unmapping
> > > > +			 * requires sleepable context, but in crash case kernel
> > > > +			 * hits the code path with interrupts disabled.
> > >
> > > In case of SNP we will need to temporarily enable interrupts during this
> > > unsharing as we invoke set_memory_encrypted() which then hits a BUG_ON() in
> > > cpa_flush() if interrupts are disabled.
> >
> > Do you really need full set_memory_encrypted()? Can't you do something
> > lighter?
> >
> We need to modify the PTE for setting c-bit to 1 so that will require
> cpa_flush(), though probably can add something lighter to do
> clflush_cache_range() directly ?

For TDX, I don't touch the shared bit as nobody is supposed to touch the
memory after that point (and set_memory_np() enforces it for the !crash
case).

Can't SNP do the same?
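The first option discussed above — have the lookup return a pointer to the entry even when the entry is none, and let the caller test `pte_none()` itself (and optionally populate the slot) — can be sketched with a toy single-level page table. These are hypothetical names; the real change would live in lookup_address_in_pgd():

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned long toy_pte_t;

#define TOY_NPTES       8
#define TOY_PTE_PRESENT 0x1UL

/* A none entry is all zeroes, as in the kernel's pte_none() convention. */
static int toy_pte_none(toy_pte_t pte)
{
	return pte == 0;
}

/* Toy page table: one level, directly indexed. */
static toy_pte_t table[TOY_NPTES];

/*
 * Option 1: always return a pointer to the slot, even for a none entry.
 * The caller decides whether to skip the hole or populate it. NULL is
 * reserved for "no entry exists at all" (out of range here).
 */
static toy_pte_t *toy_lookup_address(size_t idx)
{
	if (idx >= TOY_NPTES)
		return NULL;
	return &table[idx];
}

static int run_demo(void)
{
	table[2] = 0x1000 | TOY_PTE_PRESENT;

	/* A hole: caller gets a valid pointer and must check pte_none(). */
	toy_pte_t *pte = toy_lookup_address(1);
	assert(pte && toy_pte_none(*pte));

	/* A mapped entry. */
	pte = toy_lookup_address(2);
	assert(pte && !toy_pte_none(*pte));

	/* The caller can populate a none entry through the pointer. */
	pte = toy_lookup_address(1);
	*pte = 0x2000 | TOY_PTE_PRESENT;
	assert(!toy_pte_none(table[1]));
	return 0;
}
```

Under the second option, `toy_lookup_address(1)` would instead return NULL for the hole, which is simpler for read-only walkers like unshare_all_memory() but forecloses the populate-through-pointer use.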
On Thu, Oct 05, 2023, Kirill A. Shutemov wrote:
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index 7368d254d01f..b5acf9fb4c70 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -884,6 +884,7 @@ config INTEL_TDX_GUEST
>  	select X86_MEM_ENCRYPT
>  	select X86_MCE
>  	select UNACCEPTED_MEMORY
> +	select EMERGENCY_VIRT_CALLBACK
>  	help
>  	  Support running as a guest under Intel TDX. Without this support,
>  	  the guest kernel can not boot or run under TDX.

...

> void __init tdx_early_init(void)
> {
> 	struct tdx_module_args args = {
> @@ -882,6 +1007,14 @@ void __init tdx_early_init(void)
> 	 */
> 	x86_cpuinit.parallel_bringup = false;
>
> +	machine_ops.shutdown = tdx_shutdown;
> +
> +	/*
> +	 * KVM overrides machine_ops.crash_shutdown, use emergency

This is going to be super confusing. KVM utilizes the emergency virt
callback. The KVM paravirt guest code uses .crash_shutdown(). People that
are passingly familiar with virt and know what KVM is, but don't already
know the difference between the two, are going to be all kinds of confused.

I also feel like you're playing with fire, e.g. what's to stop the
hypervisor specific paravirt guest support from using .shutdown() in the
future?

And the callback is invoked for far more than just kexec(). I don't see how
the host can emulate a reboot without destroying and rebuilding the VM,
e.g. it can't stuff register state to emulate INIT or RESET. Unless I'm
missing something, converting shared memory back to private for a shutdown
or reboot is undesirable as it adds one more thing that can go wrong and
prevent the system from cleanly shutting down ASAP (for some definitions
of "cleanly").

Lastly, doesn't SEV need similar behavior? This seems like core
functionality for any guest with cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT).
Why not make the "unshare on kexec" code common and gate it with
CC_ATTR_GUEST_MEM_ENCRYPT?
On Fri, Oct 06, 2023 at 07:58:03AM -0700, Sean Christopherson wrote:
> On Thu, Oct 05, 2023, Kirill A. Shutemov wrote:
> > diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> > index 7368d254d01f..b5acf9fb4c70 100644
> > --- a/arch/x86/Kconfig
> > +++ b/arch/x86/Kconfig
> > @@ -884,6 +884,7 @@ config INTEL_TDX_GUEST
> >  	select X86_MEM_ENCRYPT
> >  	select X86_MCE
> >  	select UNACCEPTED_MEMORY
> > +	select EMERGENCY_VIRT_CALLBACK
> >  	help
> >  	  Support running as a guest under Intel TDX. Without this support,
> >  	  the guest kernel can not boot or run under TDX.
>
> ...
>
> > void __init tdx_early_init(void)
> > {
> > 	struct tdx_module_args args = {
> > @@ -882,6 +1007,14 @@ void __init tdx_early_init(void)
> > 	 */
> > 	x86_cpuinit.parallel_bringup = false;
> >
> > +	machine_ops.shutdown = tdx_shutdown;
> > +
> > +	/*
> > +	 * KVM overrides machine_ops.crash_shutdown, use emergency
>
> This is going to be super confusing. KVM utilizes the emergency virt
> callback. The KVM paravirt guest code uses .crash_shutdown(). People that
> are passingly familiar with virt and know what KVM is, but don't already
> know the difference between the two are going to be all kinds of confused.
>
> I also feel like you're playing with fire, e.g. what's to stop the
> hypervisor specific paravirt guest support from using .shutdown() in the
> future?
>
> And the callback is invoked for far more than just kexec(). I don't see
> how the host can emulate a reboot without destroying and rebuilding the
> VM, e.g. it can't stuff register state to emulate INIT or RESET. Unless
> I'm missing something, converting shared memory back to private for a
> shutdown or reboot is undesirable as it adds one more thing that can go
> wrong and prevent the system from cleanly shutting down ASAP (for some
> definitions of "cleanly").

Okay, fair enough. I will look for a better way to hook into the kexec
process. That was the best fit I found so far, but yes it is not ideal.

> Lastly, doesn't SEV need similar behavior? This seems like core
> functionality for any guest with
> cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT). Why not make the "unshare on
> kexec" code common and gate it with CC_ATTR_GUEST_MEM_ENCRYPT?

I don't know SEV specifics. I am open to collaboration on this.

Tom, Ashish, let me know if you need this in generic code. I can arrange
that.
On 10/5/2023 5:28 PM, Kirill A. Shutemov wrote:
> On Thu, Oct 05, 2023 at 05:01:23PM -0500, Kalra, Ashish wrote:
>> On 10/5/2023 4:28 PM, Kirill A. Shutemov wrote:
>>> On Thu, Oct 05, 2023 at 01:41:38PM -0500, Kalra, Ashish wrote:
>>>>> +static void unshare_all_memory(bool unmap)
>>>>> +{
>>>>> +	unsigned long addr, end;
>>>>> +	long found = 0, shared;
>>>>> +
>>>>> +	/*
>>>>> +	 * Walk direct mapping and convert all shared memory back to private,
>>>>> +	 */
>>>>> +
>>>>> +	addr = PAGE_OFFSET;
>>>>> +	end  = PAGE_OFFSET + get_max_mapped();
>>>>> +
>>>>> +	while (addr < end) {
>>>>> +		unsigned long size;
>>>>> +		unsigned int level;
>>>>> +		pte_t *pte;
>>>>> +
>>>>> +		pte = lookup_address(addr, &level);
>>>>
>>>> IIRC, you were earlier walking the direct mapping using
>>>> walk_page_range_novma(), any particular reason to use lookup_address()
>>>> instead ?
>>>
>>> walk_page_range_novma() wants mmap lock to be taken, but it is tricky as
>>> we run here from atomic context in case of crash.
>>>
>>> I considered using trylock to bypass the limitation, but it is a hack.
>>>
>>>>> +		size = page_level_size(level);
>>>>> +
>>>>> +		if (pte && pte_decrypted(*pte)) {
>>>>
>>>> Additionally need to add check for pte_none() here to handle physical memory
>>>> holes in direct mapping.
>>>
>>> lookup_address() returns NULL for none entries.
>>>
>>
>> Looking at lookup_address_in_pgd(), at pte level it is simply returning
>> pte_offset_kernel() and there does not seem to be a check for returning NULL
>> if pte_none() ?
>
> Hm. You are right.
>
> I think it yet another quirk in how lookup_address() implemented. We need
> to make it straight too.
>
> There's two options: either make lookup_address() return pointer for entry
> even if it is NULL, or add check for pte_none() after pte_offset_kernel()
> and return NULL if it is true.
>
> I like the first option more as it allows caller to populate the entry if
> it wants.

Yes, I like the first option.

>>>>> +			int pages = size / PAGE_SIZE;
>>>>> +
>>>>> +			/*
>>>>> +			 * Touching memory with shared bit set triggers implicit
>>>>> +			 * conversion to shared.
>>>>> +			 *
>>>>> +			 * Make sure nobody touches the shared range from
>>>>> +			 * now on.
>>>>> +			 *
>>>>> +			 * Bypass unmapping for crash scenario. Unmapping
>>>>> +			 * requires sleepable context, but in crash case kernel
>>>>> +			 * hits the code path with interrupts disabled.
>>>>
>>>> In case of SNP we will need to temporarily enable interrupts during this
>>>> unsharing as we invoke set_memory_encrypted() which then hits a BUG_ON() in
>>>> cpa_flush() if interrupts are disabled.
>>>
>>> Do you really need full set_memory_encrypted()? Can't you do something
>>> lighter?
>>>
>> We need to modify the PTE for setting c-bit to 1 so that will require
>> cpa_flush(), though probably can add something lighter to do
>> clflush_cache_range() directly ?
>
> For TDX, I don't touch the shared bit as nobody is supposed to touch the
> memory after that point (and set_memory_np() enforces it for the !crash
> case).
>
> Can't SNP do the same?
>
No, we need to make the PSC call for the HV to update the RMP, then set
C-bit=1 in the PTE, and then do a PVALIDATE to switch the page back to
private, so it needs something like a full set_memory_encrypted().

Thanks,
Ashish
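The SNP sequence described above — a PSC (Page State Change) call so the hypervisor updates the RMP, then setting the C-bit in the guest PTE, then PVALIDATE — is order-sensitive, which is why a lighter-weight shortcut is harder than on TDX. A sketch that merely models the three steps and asserts their required ordering (all names here are hypothetical stand-ins, not the real SNP API):

```c
#include <assert.h>

enum conv_step { STEP_NONE, STEP_PSC_DONE, STEP_CBIT_SET, STEP_VALIDATED };

struct toy_page {
	enum conv_step progress;
};

/* Step 1: PSC request so the hypervisor flips the RMP entry to private. */
static void psc_to_private(struct toy_page *pg)
{
	assert(pg->progress == STEP_NONE);
	pg->progress = STEP_PSC_DONE;
}

/*
 * Step 2: set C-bit=1 in the guest PTE. In the kernel this is the part
 * that needs a TLB/cache flush, which is why set_memory_encrypted()
 * trips the cpa_flush() BUG_ON with interrupts disabled.
 */
static void set_cbit(struct toy_page *pg)
{
	assert(pg->progress == STEP_PSC_DONE);
	pg->progress = STEP_CBIT_SET;
}

/* Step 3: PVALIDATE to validate the page as private in the guest. */
static void pvalidate_page(struct toy_page *pg)
{
	assert(pg->progress == STEP_CBIT_SET);
	pg->progress = STEP_VALIDATED;
}

static int run_demo(void)
{
	struct toy_page pg = { STEP_NONE };

	psc_to_private(&pg);
	set_cbit(&pg);
	pvalidate_page(&pg);
	assert(pg.progress == STEP_VALIDATED);
	return 0;
}
```

The contrast with TDX in this thread: on TDX the PTE's shared bit can be left alone because nothing touches the memory afterwards, whereas SNP cannot skip the PTE update without leaving the RMP and page tables inconsistent.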
On 10/6/2023 10:11 AM, Kirill A. Shutemov wrote:
> On Fri, Oct 06, 2023 at 07:58:03AM -0700, Sean Christopherson wrote:
>> On Thu, Oct 05, 2023, Kirill A. Shutemov wrote:
>>> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
>>> index 7368d254d01f..b5acf9fb4c70 100644
>>> --- a/arch/x86/Kconfig
>>> +++ b/arch/x86/Kconfig
>>> @@ -884,6 +884,7 @@ config INTEL_TDX_GUEST
>>>  	select X86_MEM_ENCRYPT
>>>  	select X86_MCE
>>>  	select UNACCEPTED_MEMORY
>>> +	select EMERGENCY_VIRT_CALLBACK
>>>  	help
>>>  	  Support running as a guest under Intel TDX. Without this support,
>>>  	  the guest kernel can not boot or run under TDX.
>>
>> ...
>>
>>> void __init tdx_early_init(void)
>>> {
>>> 	struct tdx_module_args args = {
>>> @@ -882,6 +1007,14 @@ void __init tdx_early_init(void)
>>> 	 */
>>> 	x86_cpuinit.parallel_bringup = false;
>>>
>>> +	machine_ops.shutdown = tdx_shutdown;
>>> +
>>> +	/*
>>> +	 * KVM overrides machine_ops.crash_shutdown, use emergency
>>
>> This is going to be super confusing. KVM utilizes the emergency virt
>> callback. The KVM paravirt guest code uses .crash_shutdown(). People that
>> are passingly familiar with virt and know what KVM is, but don't already
>> know the difference between the two are going to be all kinds of confused.
>>
>> I also feel like you're playing with fire, e.g. what's to stop the
>> hypervisor specific paravirt guest support from using .shutdown() in the
>> future?
>>
>> And the callback is invoked for far more than just kexec(). I don't see
>> how the host can emulate a reboot without destroying and rebuilding the
>> VM, e.g. it can't stuff register state to emulate INIT or RESET. Unless
>> I'm missing something, converting shared memory back to private for a
>> shutdown or reboot is undesirable as it adds one more thing that can go
>> wrong and prevent the system from cleanly shutting down ASAP (for some
>> definitions of "cleanly").
>
> Okay, fair enough. I will look for a better way to hook into the kexec
> process. That was the best fit I found so far, but yes it is not ideal.
>
>> Lastly, doesn't SEV need similar behavior? This seems like core
>> functionality for any guest with
>> cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT). Why not make the "unshare on
>> kexec" code common and gate it with CC_ATTR_GUEST_MEM_ENCRYPT?
>
> I don't know SEV specifics. I am open to collaboration on this.
>
> Tom, Ashish, let me know if you need this in generic code. I can arrange
> that.
>
Yes, some kind of a generic interface like unshare_on_kexec() gated with
CC_ATTR_GUEST_MEM_ENCRYPT is needed; we can then add SNP-specific kexec
functionality as part of this.

Thanks,
Ashish
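The generic gating being proposed might look roughly like the sketch below. This is entirely hypothetical — unshare_on_kexec() does not exist in the kernel at this point in the thread, and cc_platform_has() is stubbed with a plain flag here; the point is only the shape: one common kexec-path entry gated on CC_ATTR_GUEST_MEM_ENCRYPT, dispatching to vendor-specific (TDX or SNP) unsharing:

```c
#include <assert.h>
#include <stdbool.h>

/* Stub of cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT). */
static bool guest_mem_encrypt;

static bool cc_platform_has_mem_encrypt(void)
{
	return guest_mem_encrypt;
}

static int unshare_calls;

/* Vendor-specific unsharing (TDX walk or SNP PSC path) hangs off this. */
static void vendor_unshare_all_memory(void)
{
	unshare_calls++;
}

/* Hypothetical common entry point invoked on the kexec path. */
static void unshare_on_kexec(void)
{
	/* Plain (non-CoCo) guests never converted anything to shared. */
	if (!cc_platform_has_mem_encrypt())
		return;

	vendor_unshare_all_memory();
}

static int run_demo(void)
{
	guest_mem_encrypt = false;
	unshare_on_kexec();
	assert(unshare_calls == 0);	/* non-CoCo guest: no-op */

	guest_mem_encrypt = true;
	unshare_on_kexec();
	assert(unshare_calls == 1);	/* CoCo guest: memory converted back */
	return 0;
}
```

The attraction of gating on the CC attribute rather than on a vendor check is exactly the point Sean raises: the requirement ("the next kernel only understands E820_TYPE_RAM") is common to every memory-encrypted guest, while only the conversion mechanics differ.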
On 10/05/23 at 04:13pm, Kirill A. Shutemov wrote:
> TDX guests allocate shared buffers to perform I/O. It is done by
> allocating pages normally from the buddy allocator and converting them
> to shared with set_memory_decrypted().
>
> The target kernel has no idea what memory is converted this way. It only
      ~~~~~~~~~~~~~
> sees E820_TYPE_RAM.

I finally realized it means the 2nd kernel of the kexec reboot. Maybe we
can always call it the 2nd kernel; that works for both kexec and kdump
jumping.

>
> Accessing shared memory via private mapping is fatal. It leads to
> unrecoverable TD exit.
>
> On TD shutdown (also covers kexec), walk direct mapping and convert all
> shared memory back to private. It makes all RAM private again and target
> kernel may use it normally.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> ---
>  arch/x86/Kconfig          |   1 +
>  arch/x86/coco/tdx/kexec.c |   0
>  arch/x86/coco/tdx/tdx.c   | 137 +++++++++++++++++++++++++++++++++++++-
>  3 files changed, 136 insertions(+), 2 deletions(-)
>  create mode 100644 arch/x86/coco/tdx/kexec.c
>
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index 7368d254d01f..b5acf9fb4c70 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -884,6 +884,7 @@ config INTEL_TDX_GUEST
>  	select X86_MEM_ENCRYPT
>  	select X86_MCE
>  	select UNACCEPTED_MEMORY
> +	select EMERGENCY_VIRT_CALLBACK
>  	help
>  	  Support running as a guest under Intel TDX. Without this support,
>  	  the guest kernel can not boot or run under TDX.
> diff --git a/arch/x86/coco/tdx/kexec.c b/arch/x86/coco/tdx/kexec.c
> new file mode 100644
> index 000000000000..e69de29bb2d1
> diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
> index 56e152126f20..ac0745303983 100644
> --- a/arch/x86/coco/tdx/tdx.c
> +++ b/arch/x86/coco/tdx/tdx.c
> @@ -6,6 +6,7 @@
>
>  #include <linux/cpufeature.h>
>  #include <linux/debugfs.h>
> +#include <linux/delay.h>
>  #include <linux/export.h>
>  #include <linux/io.h>
>  #include <asm/coco.h>
> @@ -14,6 +15,8 @@
>  #include <asm/insn.h>
>  #include <asm/insn-eval.h>
>  #include <asm/pgtable.h>
> +#include <asm/reboot.h>
> +#include <asm/set_memory.h>
>
>  /* MMIO direction */
>  #define EPT_READ	0
> @@ -40,6 +43,9 @@
>
>  static atomic_long_t nr_shared;
>
> +static atomic_t conversions_in_progress;
> +static bool conversion_allowed = true;
> +
>  static inline bool pte_decrypted(pte_t pte)
>  {
>  	return cc_mkdec(pte_val(pte)) == pte_val(pte);
> @@ -704,6 +710,14 @@ static bool tdx_tlb_flush_required(bool private)
>
>  static bool tdx_cache_flush_required(void)
>  {
> +	/*
> +	 * Avoid issuing CLFLUSH on set_memory_decrypted() if conversions
> +	 * stopped. Otherwise it can race with unshare_all_memory() and trigger
> +	 * implicit conversion to shared.
> +	 */
> +	if (!conversion_allowed)
> +		return false;
> +
>  	/*
>  	 * AMD SME/SEV can avoid cache flushing if HW enforces cache coherence.
>  	 * TDX doesn't have such capability.
> @@ -787,12 +801,25 @@ static bool tdx_enc_status_changed(unsigned long vaddr, int numpages, bool enc)
>  static int tdx_enc_status_change_prepare(unsigned long vaddr, int numpages,
>  					 bool enc)
>  {
> +	atomic_inc(&conversions_in_progress);
> +
> +	/*
> +	 * Check after bumping conversions_in_progress to serialize
> +	 * against tdx_shutdown().
> +	 */
> +	if (!conversion_allowed) {
> +		atomic_dec(&conversions_in_progress);
> +		return -EBUSY;
> +	}
> +
>  	/*
>  	 * Only handle shared->private conversion here.
>  	 * See the comment in tdx_early_init().
>  	 */
> -	if (enc && !tdx_enc_status_changed(vaddr, numpages, enc))
> +	if (enc && !tdx_enc_status_changed(vaddr, numpages, enc)) {
> +		atomic_dec(&conversions_in_progress);
>  		return -EIO;
> +	}
>
>  	return 0;
>  }
> @@ -804,17 +831,115 @@ static int tdx_enc_status_change_finish(unsigned long vaddr, int numpages,
>  	 * Only handle private->shared conversion here.
>  	 * See the comment in tdx_early_init().
>  	 */
> -	if (!enc && !tdx_enc_status_changed(vaddr, numpages, enc))
> +	if (!enc && !tdx_enc_status_changed(vaddr, numpages, enc)) {
> +		atomic_dec(&conversions_in_progress);
>  		return -EIO;
> +	}
>
>  	if (enc)
>  		atomic_long_sub(numpages, &nr_shared);
>  	else
>  		atomic_long_add(numpages, &nr_shared);
>
> +	atomic_dec(&conversions_in_progress);
> +
>  	return 0;
>  }
>
> +static void unshare_all_memory(bool unmap)
> +{
> +	unsigned long addr, end;
> +	long found = 0, shared;
> +
> +	/*
> +	 * Walk direct mapping and convert all shared memory back to private,
> +	 */
> +
> +	addr = PAGE_OFFSET;
> +	end = PAGE_OFFSET + get_max_mapped();
> +
> +	while (addr < end) {
> +		unsigned long size;
> +		unsigned int level;
> +		pte_t *pte;
> +
> +		pte = lookup_address(addr, &level);
> +		size = page_level_size(level);
> +
> +		if (pte && pte_decrypted(*pte)) {
> +			int pages = size / PAGE_SIZE;
> +
> +			/*
> +			 * Touching memory with shared bit set triggers implicit
> +			 * conversion to shared.
> +			 *
> +			 * Make sure nobody touches the shared range from
> +			 * now on.
> +			 *
> +			 * Bypass unmapping for crash scenario. Unmapping
> +			 * requires sleepable context, but in crash case kernel
> +			 * hits the code path with interrupts disabled.
> +			 * It shouldn't be a problem as all secondary CPUs are
> +			 * down and kernel runs with interrupts disabled, so
> +			 * there is no room for race.
> +			 */
> +			if (unmap)
> +				set_memory_np(addr, pages);
> +
> +			if (!tdx_enc_status_changed(addr, pages, true)) {
> +				pr_err("Failed to unshare range %#lx-%#lx\n",
> +				       addr, addr + size);
> +			}
> +
> +			found += pages;
> +		}
> +
> +		addr += size;
> +	}
> +
> +	shared = atomic_long_read(&nr_shared);
> +	if (shared != found) {
> +		pr_err("shared page accounting is off\n");
> +		pr_err("nr_shared = %ld, nr_found = %ld\n", shared, found);
> +	}
> +}
> +
> +static void tdx_shutdown(void)
> +{
> +	unsigned long timeout;
> +
> +	/*
> +	 * Stop new private<->shared conversions and wait for in-flight
> +	 * conversions to complete.
> +	 *
> +	 * Do not wait more than 30 seconds.
> +	 */
> +	timeout = 30 * USEC_PER_SEC;
> +	conversion_allowed = false;
> +	while (atomic_read(&conversions_in_progress) && timeout--)
> +		udelay(1);
> +
> +	if (!timeout)
> +		pr_warn("Failed to finish shared<->private conversions\n");
> +
> +	unshare_all_memory(true);
> +
> +	native_machine_shutdown();
> +}
> +
> +static void tdx_crash_shutdown(void)
> +{
> +	/*
> +	 * Crash can race with private<->shared conversion.
> +	 *
> +	 * There's no clean way out: report and proceed.
> +	 */
> +	if (atomic_read(&conversions_in_progress))
> +		pr_warn("Failed to finish shared<->private conversions\n");
> +
> +	unshare_all_memory(false);
> +}
> +
>  void __init tdx_early_init(void)
>  {
>  	struct tdx_module_args args = {
> @@ -882,6 +1007,14 @@ void __init tdx_early_init(void)
>  	 */
>  	x86_cpuinit.parallel_bringup = false;
>
> +	machine_ops.shutdown = tdx_shutdown;
> +
> +	/*
> +	 * KVM overrides machine_ops.crash_shutdown, use emergency
> +	 * virt callback instead.
> +	 */
> +	cpu_emergency_register_virt_callback(tdx_crash_shutdown);
> +
>  	pr_info("Guest detected\n");
>  }
>
> --
> 2.41.0
>
>
> _______________________________________________
> kexec mailing list
> kexec@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/kexec
>
On Sun, Oct 08, 2023 at 04:35:27PM +0800, Baoquan He wrote:
> On 10/05/23 at 04:13pm, Kirill A. Shutemov wrote:
> > TDX guests allocate shared buffers to perform I/O. It is done by
> > allocating pages normally from the buddy allocator and converting them
> > to shared with set_memory_decrypted().
> >
> > The target kernel has no idea what memory is converted this way. It only
>       ~~~~~~~~~~~~~
> > sees E820_TYPE_RAM.
>
> I finally realized it means the 2nd kernel of the kexec reboot. Maybe we
> can always call it the 2nd kernel; that works for both kexec and kdump
> jumping.

Okay. Will fix. I am new to kexec and I don't know the proper terminology :)
On Fri, Oct 06, 2023 at 02:24:11PM -0500, Kalra, Ashish wrote:
>
> On 10/5/2023 5:28 PM, Kirill A. Shutemov wrote:
> > On Thu, Oct 05, 2023 at 05:01:23PM -0500, Kalra, Ashish wrote:
> > > On 10/5/2023 4:28 PM, Kirill A. Shutemov wrote:
> > > > On Thu, Oct 05, 2023 at 01:41:38PM -0500, Kalra, Ashish wrote:
> > > > > > +static void unshare_all_memory(bool unmap)
> > > > > > +{
> > > > > > +	unsigned long addr, end;
> > > > > > +	long found = 0, shared;
> > > > > > +
> > > > > > +	/*
> > > > > > +	 * Walk direct mapping and convert all shared memory back to private,
> > > > > > +	 */
> > > > > > +
> > > > > > +	addr = PAGE_OFFSET;
> > > > > > +	end = PAGE_OFFSET + get_max_mapped();
> > > > > > +
> > > > > > +	while (addr < end) {
> > > > > > +		unsigned long size;
> > > > > > +		unsigned int level;
> > > > > > +		pte_t *pte;
> > > > > > +
> > > > > > +		pte = lookup_address(addr, &level);
> > > > >
> > > > > IIRC, you were earlier walking the direct mapping using
> > > > > walk_page_range_novma(), any particular reason to use lookup_address()
> > > > > instead ?
> > > >
> > > > walk_page_range_novma() wants mmap lock to be taken, but it is tricky as
> > > > we run here from atomic context in case of crash.
> > > >
> > > > I considered using trylock to bypass the limitation, but it is a hack.
> > > >
> > > > > > +		size = page_level_size(level);
> > > > > > +
> > > > > > +		if (pte && pte_decrypted(*pte)) {
> > > > >
> > > > > Additionally need to add check for pte_none() here to handle physical memory
> > > > > holes in direct mapping.
> > > >
> > > > lookup_address() returns NULL for none entries.
> > > >
> > >
> > > Looking at lookup_address_in_pgd(), at pte level it is simply returning
> > > pte_offset_kernel() and there does not seem to be a check for returning NULL
> > > if pte_none() ?
> >
> > Hm. You are right.
> >
> > I think it is yet another quirk in how lookup_address() is implemented. We
> > need to make it straight too.
> >
> > There are two options: either make lookup_address() return a pointer to the
> > entry even if it is none, or add a check for pte_none() after
> > pte_offset_kernel() and return NULL if it is true.
> >
> > I like the first option more as it allows the caller to populate the entry
> > if it wants.
>
> Yes, I like the first option.

I tried to do this, but lookup_address() has too many callers. It gets
beyond the scope of the patchset. I will add a pte_none() check on the
unshare side for now.
On Fri, Oct 20, 2023 at 12:21:11PM +0300, Kirill A. Shutemov wrote:
> On Fri, Oct 06, 2023 at 02:24:11PM -0500, Kalra, Ashish wrote:
> >
> > On 10/5/2023 5:28 PM, Kirill A. Shutemov wrote:
> > > On Thu, Oct 05, 2023 at 05:01:23PM -0500, Kalra, Ashish wrote:
> > > > On 10/5/2023 4:28 PM, Kirill A. Shutemov wrote:
> > > > > On Thu, Oct 05, 2023 at 01:41:38PM -0500, Kalra, Ashish wrote:
> > > > > > > +static void unshare_all_memory(bool unmap)
> > > > > > > +{
> > > > > > > +	unsigned long addr, end;
> > > > > > > +	long found = 0, shared;
> > > > > > > +
> > > > > > > +	/*
> > > > > > > +	 * Walk direct mapping and convert all shared memory back to private,
> > > > > > > +	 */
> > > > > > > +
> > > > > > > +	addr = PAGE_OFFSET;
> > > > > > > +	end = PAGE_OFFSET + get_max_mapped();
> > > > > > > +
> > > > > > > +	while (addr < end) {
> > > > > > > +		unsigned long size;
> > > > > > > +		unsigned int level;
> > > > > > > +		pte_t *pte;
> > > > > > > +
> > > > > > > +		pte = lookup_address(addr, &level);
> > > > > >
> > > > > > IIRC, you were earlier walking the direct mapping using
> > > > > > walk_page_range_novma(), any particular reason to use lookup_address()
> > > > > > instead ?
> > > > >
> > > > > walk_page_range_novma() wants mmap lock to be taken, but it is tricky as
> > > > > we run here from atomic context in case of crash.
> > > > >
> > > > > I considered using trylock to bypass the limitation, but it is a hack.
> > > > >
> > > > > > > +		size = page_level_size(level);
> > > > > > > +
> > > > > > > +		if (pte && pte_decrypted(*pte)) {
> > > > > >
> > > > > > Additionally need to add check for pte_none() here to handle physical memory
> > > > > > holes in direct mapping.
> > > > >
> > > > > lookup_address() returns NULL for none entries.
> > > > >
> > > >
> > > > Looking at lookup_address_in_pgd(), at pte level it is simply returning
> > > > pte_offset_kernel() and there does not seem to be a check for returning NULL
> > > > if pte_none() ?
> > >
> > > Hm. You are right.
> > >
> > > I think it is yet another quirk in how lookup_address() is implemented. We
> > > need to make it straight too.
> > >
> > > There are two options: either make lookup_address() return a pointer to the
> > > entry even if it is none, or add a check for pte_none() after
> > > pte_offset_kernel() and return NULL if it is true.
> > >
> > > I like the first option more as it allows the caller to populate the entry
> > > if it wants.
> >
> > Yes, I like the first option.
>
> I tried to do this, but lookup_address() has too many callers. It gets
> beyond the scope of the patchset. I will add a pte_none() check on the
> unshare side for now.

Ah, pte_none() is not needed for the TDX implementation, as the
pte_decrypted() check will fail for it. The SEV implementation would need
an additional check.
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 7368d254d01f..b5acf9fb4c70 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -884,6 +884,7 @@ config INTEL_TDX_GUEST
 	select X86_MEM_ENCRYPT
 	select X86_MCE
 	select UNACCEPTED_MEMORY
+	select EMERGENCY_VIRT_CALLBACK
 	help
 	  Support running as a guest under Intel TDX. Without this support,
 	  the guest kernel can not boot or run under TDX.
diff --git a/arch/x86/coco/tdx/kexec.c b/arch/x86/coco/tdx/kexec.c
new file mode 100644
index 000000000000..e69de29bb2d1
diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
index 56e152126f20..ac0745303983 100644
--- a/arch/x86/coco/tdx/tdx.c
+++ b/arch/x86/coco/tdx/tdx.c
@@ -6,6 +6,7 @@

 #include <linux/cpufeature.h>
 #include <linux/debugfs.h>
+#include <linux/delay.h>
 #include <linux/export.h>
 #include <linux/io.h>
 #include <asm/coco.h>
@@ -14,6 +15,8 @@
 #include <asm/insn.h>
 #include <asm/insn-eval.h>
 #include <asm/pgtable.h>
+#include <asm/reboot.h>
+#include <asm/set_memory.h>

 /* MMIO direction */
 #define EPT_READ	0
@@ -40,6 +43,9 @@

 static atomic_long_t nr_shared;

+static atomic_t conversions_in_progress;
+static bool conversion_allowed = true;
+
 static inline bool pte_decrypted(pte_t pte)
 {
 	return cc_mkdec(pte_val(pte)) == pte_val(pte);
@@ -704,6 +710,14 @@ static bool tdx_tlb_flush_required(bool private)

 static bool tdx_cache_flush_required(void)
 {
+	/*
+	 * Avoid issuing CLFLUSH on set_memory_decrypted() if conversions
+	 * stopped. Otherwise it can race with unshare_all_memory() and trigger
+	 * implicit conversion to shared.
+	 */
+	if (!conversion_allowed)
+		return false;
+
 	/*
 	 * AMD SME/SEV can avoid cache flushing if HW enforces cache coherence.
 	 * TDX doesn't have such capability.
@@ -787,12 +801,25 @@ static bool tdx_enc_status_changed(unsigned long vaddr, int numpages, bool enc)
 static int tdx_enc_status_change_prepare(unsigned long vaddr, int numpages,
					 bool enc)
 {
+	atomic_inc(&conversions_in_progress);
+
+	/*
+	 * Check after bumping conversions_in_progress to serialize
+	 * against tdx_shutdown().
+	 */
+	if (!conversion_allowed) {
+		atomic_dec(&conversions_in_progress);
+		return -EBUSY;
+	}
+
 	/*
 	 * Only handle shared->private conversion here.
 	 * See the comment in tdx_early_init().
 	 */
-	if (enc && !tdx_enc_status_changed(vaddr, numpages, enc))
+	if (enc && !tdx_enc_status_changed(vaddr, numpages, enc)) {
+		atomic_dec(&conversions_in_progress);
 		return -EIO;
+	}

 	return 0;
 }
@@ -804,17 +831,115 @@ static int tdx_enc_status_change_finish(unsigned long vaddr, int numpages,
 	 * Only handle private->shared conversion here.
 	 * See the comment in tdx_early_init().
 	 */
-	if (!enc && !tdx_enc_status_changed(vaddr, numpages, enc))
+	if (!enc && !tdx_enc_status_changed(vaddr, numpages, enc)) {
+		atomic_dec(&conversions_in_progress);
 		return -EIO;
+	}

 	if (enc)
 		atomic_long_sub(numpages, &nr_shared);
 	else
 		atomic_long_add(numpages, &nr_shared);

+	atomic_dec(&conversions_in_progress);
+
 	return 0;
 }

+static void unshare_all_memory(bool unmap)
+{
+	unsigned long addr, end;
+	long found = 0, shared;
+
+	/*
+	 * Walk direct mapping and convert all shared memory back to private,
+	 */
+
+	addr = PAGE_OFFSET;
+	end = PAGE_OFFSET + get_max_mapped();
+
+	while (addr < end) {
+		unsigned long size;
+		unsigned int level;
+		pte_t *pte;
+
+		pte = lookup_address(addr, &level);
+		size = page_level_size(level);
+
+		if (pte && pte_decrypted(*pte)) {
+			int pages = size / PAGE_SIZE;
+
+			/*
+			 * Touching memory with shared bit set triggers implicit
+			 * conversion to shared.
+			 *
+			 * Make sure nobody touches the shared range from
+			 * now on.
+			 *
+			 * Bypass unmapping for crash scenario. Unmapping
+			 * requires sleepable context, but in crash case kernel
+			 * hits the code path with interrupts disabled.
+			 * It shouldn't be a problem as all secondary CPUs are
+			 * down and kernel runs with interrupts disabled, so
+			 * there is no room for race.
+			 */
+			if (unmap)
+				set_memory_np(addr, pages);
+
+			if (!tdx_enc_status_changed(addr, pages, true)) {
+				pr_err("Failed to unshare range %#lx-%#lx\n",
+				       addr, addr + size);
+			}
+
+			found += pages;
+		}
+
+		addr += size;
+	}
+
+	shared = atomic_long_read(&nr_shared);
+	if (shared != found) {
+		pr_err("shared page accounting is off\n");
+		pr_err("nr_shared = %ld, nr_found = %ld\n", shared, found);
+	}
+}
+
+static void tdx_shutdown(void)
+{
+	unsigned long timeout;
+
+	/*
+	 * Stop new private<->shared conversions and wait for in-flight
+	 * conversions to complete.
+	 *
+	 * Do not wait more than 30 seconds.
+	 */
+	timeout = 30 * USEC_PER_SEC;
+	conversion_allowed = false;
+	while (atomic_read(&conversions_in_progress) && timeout--)
+		udelay(1);
+
+	if (!timeout)
+		pr_warn("Failed to finish shared<->private conversions\n");
+
+	unshare_all_memory(true);
+
+	native_machine_shutdown();
+}
+
+static void tdx_crash_shutdown(void)
+{
+	/*
+	 * Crash can race with private<->shared conversion.
+	 *
+	 * There's no clean way out: report and proceed.
+	 */
+	if (atomic_read(&conversions_in_progress))
+		pr_warn("Failed to finish shared<->private conversions\n");
+
+	unshare_all_memory(false);
+}
+
 void __init tdx_early_init(void)
 {
 	struct tdx_module_args args = {
@@ -882,6 +1007,14 @@ void __init tdx_early_init(void)
 	 */
 	x86_cpuinit.parallel_bringup = false;

+	machine_ops.shutdown = tdx_shutdown;
+
+	/*
+	 * KVM overrides machine_ops.crash_shutdown, use emergency
+	 * virt callback instead.
+	 */
+	cpu_emergency_register_virt_callback(tdx_crash_shutdown);
+
 	pr_info("Guest detected\n");
 }