Message ID: <20230606095622.1939-1-kirill.shutemov@linux.intel.com>
Headers:
From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: dave.hansen@intel.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de
Cc: decui@microsoft.com, rick.p.edgecombe@intel.com, sathyanarayanan.kuppuswamy@linux.intel.com, seanjc@google.com, thomas.lendacky@amd.com, x86@kernel.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: [PATCHv3 0/3] x86/tdx: Fix one more load_unaligned_zeropad() issue
Date: Tue, 6 Jun 2023 12:56:19 +0300
Message-Id: <20230606095622.1939-1-kirill.shutemov@linux.intel.com>
Series: x86/tdx: Fix one more load_unaligned_zeropad() issue
Message
Kirill A. Shutemov
June 6, 2023, 9:56 a.m. UTC
During review of TDX guests on Hyper-V patchset Dave pointed to the
potential race between changing page private/shared status and
load_unaligned_zeropad().

Fix the issue.

v3:
 - Fix grammar;
 - Add Sathya's Reviewed-bys;
v2:
 - Add more info in commit message of the first patch.
 - Move enc_status_change_finish_noop() into a separate patch.
 - Fix typo in commit message and comment.

Kirill A. Shutemov (3):
  x86/mm: Allow guest.enc_status_change_prepare() to fail
  x86/tdx: Fix race between set_memory_encrypted() and
    load_unaligned_zeropad()
  x86/mm: Fix enc_status_change_finish_noop()

 arch/x86/coco/tdx/tdx.c         | 64 +++++++++++++++++++++++++++++++--
 arch/x86/include/asm/x86_init.h |  2 +-
 arch/x86/kernel/x86_init.c      |  4 +--
 arch/x86/mm/mem_encrypt_amd.c   |  4 ++-
 arch/x86/mm/pat/set_memory.c    |  3 +-
 5 files changed, 69 insertions(+), 8 deletions(-)
Comments
From: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Sent: Tuesday, June 6, 2023 2:56 AM
>
> During review of TDX guests on Hyper-V patchset Dave pointed to the
> potential race between changing page private/shared status and
> load_unaligned_zeropad().
>
> Fix the issue.
>
> v3:
>  - Fix grammar;
>  - Add Sathya's Reviewed-bys;
> v2:
>  - Add more info in commit message of the first patch.
>  - Move enc_status_change_finish_noop() into a separate patch.
>  - Fix typo in commit message and comment.
>
> Kirill A. Shutemov (3):
>   x86/mm: Allow guest.enc_status_change_prepare() to fail
>   x86/tdx: Fix race between set_memory_encrypted() and
>     load_unaligned_zeropad()
>   x86/mm: Fix enc_status_change_finish_noop()
>
>  arch/x86/coco/tdx/tdx.c         | 64 +++++++++++++++++++++++++++++++--
>  arch/x86/include/asm/x86_init.h |  2 +-
>  arch/x86/kernel/x86_init.c      |  4 +--
>  arch/x86/mm/mem_encrypt_amd.c   |  4 ++-
>  arch/x86/mm/pat/set_memory.c    |  3 +-
>  5 files changed, 69 insertions(+), 8 deletions(-)
>
> --
> 2.39.3

These fixes notwithstanding, load_unaligned_zeropad() is not handled
properly in a TDX VM. The problem was introduced with commit
c4e34dd99f2e, which moved the fixup code to function
ex_handler_zeropad(). This new function does a verification against
fault_addr, and the verification always fails because fault_addr is
zero. The call sequence is:

  exc_virtualization_exception()
    ve_raise_fault()
      gp_try_fixup_and_notify()  <-- always passes 0 as fault_addr
        fixup_exception()
          ex_handler_zeropad()

The validation of fault_addr could probably be removed since such
validation wasn't there prior to c4e34dd99f2e. But before going down
that path, I want to propose a different top-level solution to the
interaction between load_unaligned_zeropad() and CoCo VM page
transitions between private and shared.

When a page is transitioning, the caller can and should ensure that it
is not being accessed during the transition. But we have the
load_unaligned_zeropad() wildcard. So do the following for the
transition sequence in __set_memory_enc_pgtable():

1. Remove aliasing mappings
2. Remove the PRESENT bit from the PTEs of all transitioning pages
3. Flush the TLB globally
4. Flush the data cache if needed
5. Set/clear the encryption attribute as appropriate
6. Notify the hypervisor of the page status change
7. Add back the PRESENT bit

With this approach, load_unaligned_zeropad() just takes the normal
page-fault-based fixup path if it touches a page that is transitioning.
As a result, load_unaligned_zeropad() and CoCo VM page transitioning
are completely decoupled. We don't need to handle
architecture-specific CoCo VM exceptions and fix things up.

I've posted an RFC PATCH that implements this approach [1], and tested
on TDX VMs and SEV-SNP VMs in vTOM mode. The RFC PATCH has more details
on the benefits and implications. Follow-up discussion should probably
be done on that email thread.

Michael

[1] https://lore.kernel.org/lkml/1688661719-60329-1-git-send-email-mikelley@microsoft.com/T/#u
On 7/6/23 09:48, Michael Kelley (LINUX) wrote:
> When a page is transitioning, the caller can and should ensure
> that it is not being accessed during the transition. But we have
> the load_unaligned_zeropad() wildcard. So do the following for
> the transition sequence in __set_memory_enc_pgtable():
>
> 1. Remove aliasing mappings
> 2. Remove the PRESENT bit from the PTEs of all transitioning pages
> 3. Flush the TLB globally
> 4. Flush the data cache if needed
> 5. Set/clear the encryption attribute as appropriate
> 6. Notify the hypervisor of the page status change
> 7. Add back the PRESENT bit
>
> With this approach, load_unaligned_zeropad() just takes the
> normal page-fault-based fixup path if it touches a page that is
> transitioning.

Yes, this does seem much simpler. It funnels everything through the
page fault handler and also doesn't require being careful about the
ordering of the private<=>shared conversions.
On Thu, Jul 06, 2023 at 04:48:32PM +0000, Michael Kelley (LINUX) wrote:
> From: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Sent: Tuesday, June 6, 2023 2:56 AM

[snip]

> These fixes notwithstanding, load_unaligned_zeropad() is not handled
> properly in a TDX VM. The problem was introduced with commit
> c4e34dd99f2e, which moved the fixup code to function
> ex_handler_zeropad().

Ughh.. I needed to pay more attention to the rework around
load_unaligned_zeropad() :/

> This new function does a verification against
> fault_addr, and the verification always fails because fault_addr is zero.
> The call sequence is:
>
>   exc_virtualization_exception()
>     ve_raise_fault()
>       gp_try_fixup_and_notify()  <-- always passes 0 as fault_addr
>         fixup_exception()
>           ex_handler_zeropad()
>
> The validation of fault_addr could probably be removed since
> such validation wasn't there prior to c4e34dd99f2e. But before
> going down that path, I want to propose a different top-level
> solution to the interaction between load_unaligned_zeropad()
> and CoCo VM page transitions between private and shared.
>
> With this approach, load_unaligned_zeropad() just takes the
> normal page-fault-based fixup path if it touches a page that is
> transitioning. As a result, load_unaligned_zeropad() and CoCo
> VM page transitioning are completely decoupled. We don't
> need to handle architecture-specific CoCo VM exceptions and
> fix things up.

It only addresses the problem that happens on transition, but
load_unaligned_zeropad() is still a problem for the shared mappings in
general, after transition is complete. Like if load_unaligned_zeropad()
steps from private to shared mapping and shared mapping triggers #VE,
kernel should be able to handle it.

The testcase that triggers the problem (It is ugly, but useful.):

#include <linux/mm.h>
#include <asm/word-at-a-time.h>

static int f(pte_t *pte, unsigned long addr, void *data)
{
	pte_t ***p = data;

	if (p) {
		*(*p) = pte;
		(*p)++;
	}
	return 0;
}

static struct vm_struct *alloc_vm_area(size_t size, pte_t **ptes)
{
	struct vm_struct *area;

	area = get_vm_area_caller(size, VM_IOREMAP,
				  __builtin_return_address(0));
	if (area == NULL)
		return NULL;

	if (apply_to_page_range(&init_mm, (unsigned long)area->addr,
				size, f, ptes ? &ptes : NULL)) {
		free_vm_area(area);
		return NULL;
	}

	return area;
}

#define SHARED_PFN_TRIGGERS_VE 0x80000

static int test_zeropad(void)
{
	struct vm_struct *area;
	pte_t *pte[2];
	unsigned long a;
	struct page *page;

	page = alloc_page(GFP_KERNEL);
	area = alloc_vm_area(2 * PAGE_SIZE, pte);

	set_pte_at(&init_mm, (unsigned long)area->addr, pte[0],
		   pfn_pte(page_to_pfn(page), PAGE_KERNEL));
	set_pte_at(&init_mm, (unsigned long)(area->addr + PAGE_SIZE), pte[1],
		   pfn_pte(SHARED_PFN_TRIGGERS_VE,
			   pgprot_decrypted(PAGE_KERNEL)));

	a = load_unaligned_zeropad(area->addr + PAGE_SIZE - 1);
	printk("a: %#lx\n", a);
	for(;;);
	return 0;
}
late_initcall(test_zeropad);

Below is a patch that provides fixup code with the address it wants.

Any comments?

diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 58b1f208eff5..4a817d20ce3b 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -697,9 +697,10 @@ static bool try_fixup_enqcmd_gp(void)
 }
 
 static bool gp_try_fixup_and_notify(struct pt_regs *regs, int trapnr,
-				    unsigned long error_code, const char *str)
+				    unsigned long error_code, const char *str,
+				    unsigned long address)
 {
-	if (fixup_exception(regs, trapnr, error_code, 0))
+	if (fixup_exception(regs, trapnr, error_code, address))
 		return true;
 
 	current->thread.error_code = error_code;
@@ -759,7 +760,7 @@ DEFINE_IDTENTRY_ERRORCODE(exc_general_protection)
 		goto exit;
 	}
 
-	if (gp_try_fixup_and_notify(regs, X86_TRAP_GP, error_code, desc))
+	if (gp_try_fixup_and_notify(regs, X86_TRAP_GP, error_code, desc, 0))
 		goto exit;
 
 	if (error_code)
@@ -1357,17 +1358,20 @@ DEFINE_IDTENTRY(exc_device_not_available)
 
 #define VE_FAULT_STR "VE fault"
 
-static void ve_raise_fault(struct pt_regs *regs, long error_code)
+static void ve_raise_fault(struct pt_regs *regs, long error_code,
+			   unsigned long address)
 {
 	if (user_mode(regs)) {
 		gp_user_force_sig_segv(regs, X86_TRAP_VE, error_code,
 				       VE_FAULT_STR);
 		return;
 	}
 
-	if (gp_try_fixup_and_notify(regs, X86_TRAP_VE, error_code, VE_FAULT_STR))
+	if (gp_try_fixup_and_notify(regs, X86_TRAP_VE, error_code,
+				    VE_FAULT_STR, address)) {
 		return;
+	}
 
-	die_addr(VE_FAULT_STR, regs, error_code, 0);
+	die_addr(VE_FAULT_STR, regs, error_code, address);
 }
 
 /*
@@ -1431,7 +1435,7 @@ DEFINE_IDTENTRY(exc_virtualization_exception)
 	 * it successfully, treat it as #GP(0) and handle it.
 	 */
 	if (!tdx_handle_virt_exception(regs, &ve))
-		ve_raise_fault(regs, 0);
+		ve_raise_fault(regs, 0, ve.gla);
 
 	cond_local_irq_disable(regs);
 }

-- 
Kiryl Shutsemau / Kirill A. Shutemov
From: Kirill A. Shutemov <kirill@shutemov.name> Sent: Friday, July 7, 2023 7:07 AM
>
> On Thu, Jul 06, 2023 at 04:48:32PM +0000, Michael Kelley (LINUX) wrote:
> > From: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Sent: Tuesday, June 6, 2023 2:56 AM

[snip]

> It only addresses the problem that happens on transition, but
> load_unaligned_zeropad() is still a problem for the shared mappings in
> general, after transition is complete. Like if load_unaligned_zeropad()
> steps from private to shared mapping and shared mapping triggers #VE,
> kernel should be able to handle it.

I'm showing my ignorance of TDX architectural details, but what's the
situation where shared mappings in general can trigger a #VE? How do
such situations get handled for references that aren't from
load_unaligned_zeropad()?

> The testcase that triggers the problem (It is ugly, but useful.):

[snip]

> Below is a patch that provides fixup code with the address it wants.
>
> Any comments?

This looks good to me. I applied the diff to a TDX VM running on
Hyper-V. When a load_unaligned_zeropad() occurs on a page that is
transitioning between private and shared, the zeropad fixup is now done
correctly via the #VE handler. (This is *without* my RFC patch to mark
the pages invalid during a transition.)

Michael

[snip]
On Sat, Jul 08, 2023 at 11:53:08PM +0000, Michael Kelley (LINUX) wrote:
> From: Kirill A. Shutemov <kirill@shutemov.name> Sent: Friday, July 7, 2023 7:07 AM
> >
> > On Thu, Jul 06, 2023 at 04:48:32PM +0000, Michael Kelley (LINUX) wrote:
> > > From: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Sent: Tuesday, June 6, 2023 2:56 AM
>
> [snip]
>
> > It only addresses the problem that happens on transition, but
> > load_unaligned_zeropad() is still a problem for the shared mappings in
> > general, after transition is complete. Like if load_unaligned_zeropad()
> > steps from private to shared mapping and shared mapping triggers #VE,
> > kernel should be able to handle it.
>
> I'm showing my ignorance of TDX architectural details, but what's the
> situation where shared mappings in general can trigger a #VE? How do
> such situations get handled for references that aren't from
> load_unaligned_zeropad()?

Shared mappings are under host/VMM control. It can just not map the page
in shared-ept and trigger ept-violation #VE.

> > Any comments?
>
> This looks good to me. I applied the diff to a TDX VM running on
> Hyper-V. When a load_unaligned_zeropad() occurs on a page that is
> transitioning between private and shared, the zeropad fixup is now
> done correctly via the #VE handler. (This is *without* my RFC patch to
> mark the pages invalid during a transition.)

Great.

I am on vacation for the next two weeks. I will prepare a proper patch
when I am back. Feel free to make the patch yourself if you feel it is
urgent.
On 7/9/23 01:09, Kirill A. Shutemov wrote:
> On Sat, Jul 08, 2023 at 11:53:08PM +0000, Michael Kelley (LINUX) wrote:
>> From: Kirill A. Shutemov <kirill@shutemov.name> Sent: Friday, July 7, 2023 7:07 AM

[snip]

>> I'm showing my ignorance of TDX architectural details, but what's the
>> situation where shared mappings in general can trigger a #VE? How do
>> such situations get handled for references that aren't from
>> load_unaligned_zeropad()?
>
> Shared mappings are under host/VMM control. It can just not map the page
> in shared-ept and trigger ept-violation #VE.
>
> I am on vacation for the next two weeks. I will prepare a proper patch
> when I am back. Feel free to make the patch yourself if you feel it is
> urgent.

Michael,

Are you still pursuing the RFC patch, then? Just trying to decide whether
a patch will be needed for SNP...

Thanks,
Tom
From: Tom Lendacky <thomas.lendacky@amd.com> Sent: Monday, July 10, 2023 6:59 AM
>
> On 7/9/23 01:09, Kirill A. Shutemov wrote:

[snip]

> Michael,
>
> Are you still pursuing the RFC patch, then? Just trying to decide whether
> a patch will be needed for SNP...

Yes, I'm still pursuing the RFC patch. In addition to solving the SNP
problems, I think there are some benefits with TDX. But I need to have
further discussion with Kirill, which may be delayed a bit while he's
out on vacation.

Michael
From: Kirill A. Shutemov <kirill@shutemov.name> Sent: Saturday, July 8, 2023 11:09 PM
>
> On Sat, Jul 08, 2023 at 11:53:08PM +0000, Michael Kelley (LINUX) wrote:

[snip]

> > I'm showing my ignorance of TDX architectural details, but what's the
> > situation where shared mappings in general can trigger a #VE? How do
> > such situations get handled for references that aren't from
> > load_unaligned_zeropad()?
>
> Shared mappings are under host/VMM control. It can just not map the page
> in shared-ept and trigger ept-violation #VE.

I know you are out on vacation, but let me follow up now for further
discussion when you are back.

Isn't the scenario you are describing a malfunctioning or malicious
host/VMM? Would what you are describing be done as part of normal
operation? Kernel code must have switched the page from private to
shared for some purpose. As soon as that code (which presumably does not
have any entry in the exception table) touches the page, it would take
the #VE and then enter the die path because there's no fixup. So is
there value in having load_unaligned_zeropad() handle the #VE and
succeed where a normal reference would fail?

I'd still like to see the private <-> shared transition code mark the
pages as invalid during the transition, and avoid the possibility of #VE
and similar cases with SEV-SNP. Such an approach reduces (eliminates?)
entanglement between CoCo-specific exceptions and
load_unaligned_zeropad(). It also greatly simplifies TD Partition cases
and SEV-SNP cases where a paravisor is used.

But maybe I'm still missing a case where code must handle the #VE for
load_unaligned_zeropad() outside of private <-> shared transitions.

Michael

[snip]
On Thu, Jul 13, 2023 at 02:43:39PM +0000, Michael Kelley (LINUX) wrote:
> From: Kirill A. Shutemov <kirill@shutemov.name> Sent: Saturday, July 8, 2023 11:09 PM

[snip]

> Isn't the scenario you are describing a malfunctioning or malicious
> host/VMM? Would what you are describing be done as part of normal
> operation? Kernel code must have switched the page from private to
> shared for some purpose. As soon as that code (which presumably does not
> have any entry in the exception table) touches the page, it would take
> the #VE and then enter the die path because there's no fixup. So is
> there value in having load_unaligned_zeropad() handle the #VE and
> succeed where a normal reference would fail?

#VE on shared memory is legitimately used for MMIO. But the MMIO region
is usually separate from real memory in the physical address space.

But we also have DMA.

DMA pages are allocated from the common pool of memory and can be next
to the dentry cache that the kernel accesses with
load_unaligned_zeropad(). DMA pages are shared, but they are usually
backed by memory and do not cause #VE. However, shared memory is under
full control of the VMM, and the VMM can remove a page at any point,
which would make the platform deliver a #VE to the guest. This is a
pathological scenario, but I think it is still worth handling
gracefully.

> I'd still like to see the private <-> shared transition code mark the
> pages as invalid during the transition, and avoid the possibility of #VE
> and similar cases with SEV-SNP. Such an approach reduces (eliminates?)
> entanglement between CoCo-specific exceptions and
> load_unaligned_zeropad(). It also greatly simplifies TD Partition cases
> and SEV-SNP cases where a paravisor is used.

It doesn't eliminate the issue for TDX as the scenario above is not
transient. It can happen after memory is converted to shared.
From: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Sent: Monday, July 24, 2023 4:19 PM
>
> On Thu, Jul 13, 2023 at 02:43:39PM +0000, Michael Kelley (LINUX) wrote:
>
> [snip]
>
> > So is there value in having load_unaligned_zeropad() handle the #VE and
> > succeed where a normal reference would fail?
>
> #VE on shared memory is legitimately used for MMIO. But an MMIO region is
> usually separate from real memory in the physical address space.
>
> But we also have DMA.
>
> DMA pages are allocated from the common pool of memory, and they can sit
> next to the dentry cache that the kernel accesses with
> load_unaligned_zeropad(). DMA pages are shared, but they are usually
> backed by memory and do not cause #VE. However, shared memory is under
> full control of the VMM, and the VMM can remove a page at any point,
> which would make the platform deliver a #VE to the guest. This is a
> pathological scenario, but I think it is still worth handling gracefully.

Yep, pages targeted by DMA have been transitioned to shared, and could
be scattered anywhere in the guest physical address space.  Such a page
could be touched by load_unaligned_zeropad().  But could you elaborate
more on the "pathological scenario" where the guest physical page isn't
backed by memory?

Sure, the VMM can remove the page at any point, but why would it do
so?  Is doing so a legitimate part of the host/guest contract that the
guest must handle cleanly?  More importantly, what is the guest expected
to do in such a case?  I would expect that at some point such a DMA page
is accessed by a guest vCPU with an explicit reference that is not
load_unaligned_zeropad().  Then the guest would take a #VE that
goes down the #GP path and invokes die().

I don't object to making the load_unaligned_zeropad() fixup path work
correctly in response to a #VE, as it seems like general goodness.  I'm
just trying to make sure I understand the nuances of "why". :-)

> > I'd still like to see the private <-> shared transition code mark the pages
> > as invalid during the transition, and avoid the possibility of #VE and
> > similar cases with SEV-SNP.
>
> [snip]
>
> It doesn't eliminate the issue for TDX, as the scenario above is not
> transient. It can happen after memory is converted to shared.

Notwithstanding, do you have any objections to the private <-> shared
transition code being changed so it won't be the cause of #VE, and
similar on SEV-SNP?

Michael
On Tue, Jul 25, 2023 at 03:51:24PM +0000, Michael Kelley (LINUX) wrote:
> From: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Sent: Monday, July 24, 2023 4:19 PM
>
> [snip]
>
> Sure, the VMM can remove the page at any point, but why would it do
> so?  Is doing so a legitimate part of the host/guest contract that the
> guest must handle cleanly?  More importantly, what is the guest expected
> to do in such a case?  I would expect that at some point such a DMA page
> is accessed by a guest vCPU with an explicit reference that is not
> load_unaligned_zeropad().  Then the guest would take a #VE that
> goes down the #GP path and invokes die().
>
> I don't object to making the load_unaligned_zeropad() fixup path work
> correctly in response to a #VE, as it seems like general goodness.  I'm
> just trying to make sure I understand the nuances of "why". :-)

We actually triggered the issue during internal testing. I wrote the
initial patch after that. But looking back at the report, I don't have an
answer for why the page triggered #VE. Maybe the VMM or virtual BIOS
screwed up and put MMIO next to normal memory, I dunno.

> > > I'd still like to see the private <-> shared transition code mark the pages
> > > as invalid during the transition, and avoid the possibility of #VE and
> > > similar cases with SEV-SNP.
>
> [snip]
>
> Notwithstanding, do you have any objections to the private <-> shared
> transition code being changed so it won't be the cause of #VE, and
> similar on SEV-SNP?

I am not yet convinced it is needed. But let's see the code.
From: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Sent: Tuesday, July 25, 2023 4:13 PM
>
> On Tue, Jul 25, 2023 at 03:51:24PM +0000, Michael Kelley (LINUX) wrote:
>
> [snip]
>
> We actually triggered the issue during internal testing. I wrote the
> initial patch after that. But looking back at the report, I don't have an
> answer for why the page triggered #VE. Maybe the VMM or virtual BIOS
> screwed up and put MMIO next to normal memory, I dunno.
>
> > Notwithstanding, do you have any objections to the private <-> shared
> > transition code being changed so it won't be the cause of #VE, and
> > similar on SEV-SNP?
>
> I am not yet convinced it is needed. But let's see the code.

Fair enough.  But similarly, I'm not convinced that we have
load_unaligned_zeropad() cases outside of private <-> shared transitions
that *must* be fixed up cleanly. :-)

Here's the RFC patch, and it includes a couple of open questions:

https://lore.kernel.org/lkml/1688661719-60329-1-git-send-email-mikelley@microsoft.com/

In TD Partitioning configurations, I'm hoping we can eliminate #VE EPT
violations needing to be handled in the L2 guest.  The L1 guest will
handle things like MMIO.  But the L1 guest can't easily detect that a
#VE with an EPT violation is due to load_unaligned_zeropad() in the L2.
And trying to reflect the #VE from the L1 guest to the L2 guest wades
into a bunch of problems that I want to avoid.  The same problems arise
in SEV-SNP with a paravisor running at VMPL 0.  Changing the
private <-> shared transition code prevents the problems there as well.

Michael