From patchwork Thu May 18 23:14:31 2023
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 96099
From: "Kirill A. Shutemov"
To: Borislav Petkov, Andy Lutomirski, Dave Hansen, Sean Christopherson,
    Andrew Morton, Joerg Roedel, Ard Biesheuvel
Cc: Andi Kleen, Kuppuswamy Sathyanarayanan, David Rientjes, Vlastimil Babka,
    Tom Lendacky, Thomas Gleixner, Peter Zijlstra, Paolo Bonzini, Ingo Molnar,
    Dario Faggioli, Mike Rapoport, David Hildenbrand, Mel Gorman,
    marcelo.cerri@canonical.com, tim.gardner@canonical.com,
    khalid.elmously@canonical.com, philip.cox@canonical.com,
    aarcange@redhat.com, peterx@redhat.com, x86@kernel.org, linux-mm@kvack.org,
    linux-coco@lists.linux.dev, linux-efi@vger.kernel.org,
    linux-kernel@vger.kernel.org,
Shutemov" , Dave Hansen Subject: [PATCHv12 6/9] efi/unaccepted: Avoid load_unaligned_zeropad() stepping into unaccepted memory Date: Fri, 19 May 2023 02:14:31 +0300 Message-Id: <20230518231434.26080-7-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.39.3 In-Reply-To: <20230518231434.26080-1-kirill.shutemov@linux.intel.com> References: <20230518231434.26080-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 X-Spam-Status: No, score=-4.3 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, RCVD_IN_MSPIKE_H3,RCVD_IN_MSPIKE_WL,SPF_HELO_NONE,SPF_NONE, T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1766275726752512175?= X-GMAIL-MSGID: =?utf-8?q?1766275726752512175?= load_unaligned_zeropad() can lead to unwanted loads across page boundaries. The unwanted loads are typically harmless. But, they might be made to totally unrelated or even unmapped memory. load_unaligned_zeropad() relies on exception fixup (#PF, #GP and now #VE) to recover from these unwanted loads. But, this approach does not work for unaccepted memory. For TDX, a load from unaccepted memory will not lead to a recoverable exception within the guest. The guest will exit to the VMM where the only recourse is to terminate the guest. There are two parts to fix this issue and comprehensively avoid access to unaccepted memory. Together these ensure that an extra "guard" page is accepted in addition to the memory that needs to be used. 1. Implicitly extend the range_contains_unaccepted_memory(start, end) checks up to end+unit_size if 'end' is aligned on a unit_size boundary. 2. Implicitly extend accept_memory(start, end) to end+unit_size if 'end' is aligned on a unit_size boundary. Side note: This leads to something strange. Pages which were accepted at boot, marked by the firmware as accepted and will never _need_ to be accepted might be on unaccepted_pages list This is a cue to ensure that the next page is accepted before 'page' can be used. This is an actual, real-world problem which was discovered during TDX testing. Signed-off-by: Kirill A. Shutemov Reviewed-by: Dave Hansen Reviewed-by: Ard Biesheuvel Reviewed-by: Tom Lendacky --- drivers/firmware/efi/unaccepted_memory.c | 35 ++++++++++++++++++++++++ 1 file changed, 35 insertions(+) diff --git a/drivers/firmware/efi/unaccepted_memory.c b/drivers/firmware/efi/unaccepted_memory.c index bb91c41f76fb..3d1ca60916dd 100644 --- a/drivers/firmware/efi/unaccepted_memory.c +++ b/drivers/firmware/efi/unaccepted_memory.c @@ -37,6 +37,34 @@ void accept_memory(phys_addr_t start, phys_addr_t end) start -= unaccepted->phys_base; end -= unaccepted->phys_base; + /* + * load_unaligned_zeropad() can lead to unwanted loads across page + * boundaries. The unwanted loads are typically harmless. But, they + * might be made to totally unrelated or even unmapped memory. + * load_unaligned_zeropad() relies on exception fixup (#PF, #GP and now + * #VE) to recover from these unwanted loads. + * + * But, this approach does not work for unaccepted memory. For TDX, a + * load from unaccepted memory will not lead to a recoverable exception + * within the guest. The guest will exit to the VMM where the only + * recourse is to terminate the guest. 
diff --git a/drivers/firmware/efi/unaccepted_memory.c b/drivers/firmware/efi/unaccepted_memory.c
index bb91c41f76fb..3d1ca60916dd 100644
--- a/drivers/firmware/efi/unaccepted_memory.c
+++ b/drivers/firmware/efi/unaccepted_memory.c
@@ -37,6 +37,34 @@ void accept_memory(phys_addr_t start, phys_addr_t end)
 	start -= unaccepted->phys_base;
 	end -= unaccepted->phys_base;
 
+	/*
+	 * load_unaligned_zeropad() can lead to unwanted loads across page
+	 * boundaries. The unwanted loads are typically harmless. But, they
+	 * might be made to totally unrelated or even unmapped memory.
+	 * load_unaligned_zeropad() relies on exception fixup (#PF, #GP and now
+	 * #VE) to recover from these unwanted loads.
+	 *
+	 * But, this approach does not work for unaccepted memory. For TDX, a
+	 * load from unaccepted memory will not lead to a recoverable exception
+	 * within the guest. The guest will exit to the VMM where the only
+	 * recourse is to terminate the guest.
+	 *
+	 * There are two parts to fix this issue and comprehensively avoid
+	 * access to unaccepted memory. Together these ensure that an extra
+	 * "guard" page is accepted in addition to the memory that needs to be
+	 * used:
+	 *
+	 * 1. Implicitly extend the range_contains_unaccepted_memory(start, end)
+	 *    checks up to end+unit_size if 'end' is aligned on a unit_size
+	 *    boundary.
+	 *
+	 * 2. Implicitly extend accept_memory(start, end) to end+unit_size if
+	 *    'end' is aligned on a unit_size boundary. (immediately following
+	 *    this comment)
+	 */
+	if (!(end % unit_size))
+		end += unit_size;
+
 	/* Make sure not to overrun the bitmap */
 	if (end > unaccepted->size * unit_size * BITS_PER_BYTE)
 		end = unaccepted->size * unit_size * BITS_PER_BYTE;
@@ -84,6 +112,13 @@ bool range_contains_unaccepted_memory(phys_addr_t start, phys_addr_t end)
 	start -= unaccepted->phys_base;
 	end -= unaccepted->phys_base;
 
+	/*
+	 * Also consider the unaccepted state of the *next* page. See fix #1 in
+	 * the comment on load_unaligned_zeropad() in accept_memory().
+	 */
+	if (!(end % unit_size))
+		end += unit_size;
+
 	/* Make sure not to overrun the bitmap */
 	if (end > unaccepted->size * unit_size * BITS_PER_BYTE)
 		end = unaccepted->size * unit_size * BITS_PER_BYTE;
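
For readers unfamiliar with why load_unaligned_zeropad() strays past the
requested bytes at all: a word-sized load issued near the end of a page can
cover bytes that belong to the next page. The stand-alone snippet below only
mimics that access pattern with made-up buffer sizes; it is not the kernel
helper and has none of its exception fixup.

/*
 * Illustration only: an 8-byte load near a page boundary touches the
 * next page. If that next page were unaccepted TDX guest memory, the
 * load could not be fixed up inside the guest - hence the guard unit.
 */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL

int main(void)
{
	static char pages[2 * PAGE_SIZE];       /* pages[PAGE_SIZE] models the next page */
	const char *p = &pages[PAGE_SIZE - 3];  /* data straddling the boundary */
	uint64_t word;

	/*
	 * This 8-byte read covers pages[PAGE_SIZE-3 .. PAGE_SIZE+4]:
	 * 3 bytes from the current page and 5 bytes from the next one.
	 */
	memcpy(&word, p, sizeof(word));
	printf("loaded %#llx spanning the page boundary\n",
	       (unsigned long long)word);
	return 0;
}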