Message ID | 20231017202505.340906-7-rick.p.edgecombe@intel.com |
---|---|
State | New |
Subject | [PATCH 06/10] dma: Use free_decrypted_pages() |
Series | Handle set_memory_XXcrypted() errors |
Commit Message
Edgecombe, Rick P
Oct. 17, 2023, 8:25 p.m. UTC
On TDX it is possible for the untrusted host to cause
set_memory_encrypted() or set_memory_decrypted() to fail such that an
error is returned and the resulting memory is shared. Callers need to take
care to handle these errors to avoid returning decrypted (shared) memory to
the page allocator, which could lead to functional or security issues.
DMA could free decrypted/shared pages if set_memory_decrypted() fails.
Use the recently added free_decrypted_pages() to avoid this.
Several paths also result in properly encrypted pages being freed through
the same freeing function. Rely on free_decrypted_pages() to not leak the
memory in these cases.
Cc: Christoph Hellwig <hch@lst.de>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: iommu@lists.linux.dev
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
include/linux/dma-map-ops.h | 3 ++-
kernel/dma/contiguous.c | 2 +-
2 files changed, 3 insertions(+), 2 deletions(-)
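For context, a minimal sketch of the failure mode the commit message
describes, assuming the free_decrypted_pages() helper introduced earlier in
this series; alloc_shared_buffer() is a hypothetical caller, not code from
the patch:

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/set_memory.h>

/* Hypothetical allocation path in a TDX guest. */
static struct page *alloc_shared_buffer(int order)
{
	struct page *page = alloc_pages(GFP_KERNEL, order);

	if (!page)
		return NULL;

	if (set_memory_decrypted((unsigned long)page_address(page),
				 1 << order)) {
		/*
		 * The conversion may have partially succeeded, so the
		 * pages could still be shared with the host. Instead of
		 * __free_pages(), use free_decrypted_pages(), which
		 * re-encrypts before freeing and leaks the pages if
		 * that fails.
		 */
		free_decrypted_pages((unsigned long)page_address(page),
				     order);
		return NULL;
	}
	return page;
}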
Comments
On Tue, Oct 17, 2023 at 01:25:01PM -0700, Rick Edgecombe wrote:
> struct cma;
>
> @@ -165,7 +166,7 @@ static inline struct page *dma_alloc_contiguous(struct device *dev, size_t size,
>  static inline void dma_free_contiguous(struct device *dev, struct page *page,
>  		size_t size)
>  {
> -	__free_pages(page, get_order(size));
> +	free_decrypted_pages((unsigned long)page_address(page), get_order(size));

CMA can be highmem, so this won't work totally independent of what
free_decrypted_pages actually does. Also please avoid the overly long
line.
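To make the highmem objection concrete: a CMA page without a permanent
kernel mapping has no linear address, so page_address() returns NULL and the
patched free path would hand address 0 to free_decrypted_pages(). A sketch
of the hazard (free_shared_page() is hypothetical):

#include <linux/highmem.h>
#include <linux/mm.h>

static void free_shared_page(struct page *page, size_t size)
{
	/* NULL for a highmem page with no permanent kernel mapping. */
	unsigned long addr = (unsigned long)page_address(page);

	/* With a highmem CMA area this passes addr == 0, which is the
	 * problem Christoph points out in the hunk quoted above. */
	free_decrypted_pages(addr, get_order(size));
}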
Link to whole series: https://lore.kernel.org/lkml/20231017202505.340906-1-rick.p.edgecombe@intel.com/

On Wed, 2023-10-18 at 08:24 +0200, Christoph Hellwig wrote:
> On Tue, Oct 17, 2023 at 01:25:01PM -0700, Rick Edgecombe wrote:
> > struct cma;
> >
> > @@ -165,7 +166,7 @@ static inline struct page *dma_alloc_contiguous(struct device *dev, size_t size,
> >  static inline void dma_free_contiguous(struct device *dev, struct page *page,
> >  		size_t size)
> >  {
> > -	__free_pages(page, get_order(size));
> > +	free_decrypted_pages((unsigned long)page_address(page), get_order(size));
>
> CMA can be highmem, so this won't work totally independent of what
> free_decrypted_pages actually does. Also please avoid the overly long
> line.

Argh, yes this is broken for highmem. Thanks for pointing it out. For
x86, we don't need to worry about doing set_memory_XXcrypted() with
highmem. Checking the Kconfig logic around the other
set_memory_XXcrypted() implementations:

s390 - Doesn't support HIGHMEM
powerpc - Doesn't support set_memory_XXcrypted() and HIGHMEM together

So that would mean set_memory_encrypted() is not needed on the HIGHMEM
configs (i.e. it's ok if there is no virtual mapping at free-time,
because it can skip the conversion work). So free_decrypted_pages()
could be changed to not disturb the HIGHMEM configs, like this:

static inline void free_decrypted_pages(struct page *page, int order)
{
	void *addr = page_address(page);
	int ret = 0;

	if (addr)
		ret = set_memory_encrypted((unsigned long)addr, 1 << order);

	if (ret) {
		WARN_ONCE(1, "Failed...\n");
		return;
	}

	__free_pages(page, order);
}

Or we could just fix all the callers to open code the right logic. The
free_decrypted_pages() helper is not really saving code across the
series, and only serving to help callers avoid re-introducing the
issue. But I'm sort of worried it will be easy to do just that. Hmm...
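If free_decrypted_pages() took a struct page * as in the sketch above, the
dma-map-ops.h hunk could drop the page_address() cast entirely; a
hypothetical version of the inline stub, not what this patch does:

static inline void dma_free_contiguous(struct device *dev, struct page *page,
		size_t size)
{
	free_decrypted_pages(page, get_order(size));
}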
On 2023-10-17 21:25, Rick Edgecombe wrote:
> On TDX it is possible for the untrusted host to cause
> set_memory_encrypted() or set_memory_decrypted() to fail such that an
> error is returned and the resulting memory is shared. Callers need to
> take care to handle these errors to avoid returning decrypted (shared)
> memory to the page allocator, which could lead to functional or
> security issues.
>
> DMA could free decrypted/shared pages if set_memory_decrypted() fails.
> Use the recently added free_decrypted_pages() to avoid this.
>
> Several paths also result in properly encrypted pages being freed
> through the same freeing function. Rely on free_decrypted_pages() to
> not leak the memory in these cases.

If something's needed in the fallback path here, what about the
cma_release() paths?

Thanks,
Robin.

> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: iommu@lists.linux.dev
> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
> ---
>  include/linux/dma-map-ops.h | 3 ++-
>  kernel/dma/contiguous.c     | 2 +-
>  2 files changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
> index f2fc203fb8a1..b0800cbbc357 100644
> --- a/include/linux/dma-map-ops.h
> +++ b/include/linux/dma-map-ops.h
> @@ -9,6 +9,7 @@
>  #include <linux/dma-mapping.h>
>  #include <linux/pgtable.h>
>  #include <linux/slab.h>
> +#include <linux/set_memory.h>
>
>  struct cma;
>
> @@ -165,7 +166,7 @@ static inline struct page *dma_alloc_contiguous(struct device *dev, size_t size,
>  static inline void dma_free_contiguous(struct device *dev, struct page *page,
>  		size_t size)
>  {
> -	__free_pages(page, get_order(size));
> +	free_decrypted_pages((unsigned long)page_address(page), get_order(size));
>  }
>  #endif /* CONFIG_DMA_CMA*/
>
> diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
> index f005c66f378c..e962f1f6434e 100644
> --- a/kernel/dma/contiguous.c
> +++ b/kernel/dma/contiguous.c
> @@ -429,7 +429,7 @@ void dma_free_contiguous(struct device *dev, struct page *page, size_t size)
>  	}
>
>  	/* not in any cma, free from buddy */
> -	__free_pages(page, get_order(size));
> +	free_decrypted_pages((unsigned long)page_address(page), get_order(size));
>  }
>
>  /*
On Wed, 2023-10-18 at 18:42 +0100, Robin Murphy wrote:
> On 2023-10-17 21:25, Rick Edgecombe wrote:
> > On TDX it is possible for the untrusted host to cause
> > set_memory_encrypted() or set_memory_decrypted() to fail such that
> > an error is returned and the resulting memory is shared. Callers
> > need to take care to handle these errors to avoid returning
> > decrypted (shared) memory to the page allocator, which could lead
> > to functional or security issues.
> >
> > DMA could free decrypted/shared pages if set_memory_decrypted()
> > fails. Use the recently added free_decrypted_pages() to avoid this.
> >
> > Several paths also result in properly encrypted pages being freed
> > through the same freeing function. Rely on free_decrypted_pages()
> > to not leak the memory in these cases.
>
> If something's needed in the fallback path here, what about the
> cma_release() paths?

You mean inside cma_release()? If so, unfortunately I think it won't
fit great because there are callers that are never dealing with shared
memory (hugetlb). The reset-to-private operation does extra work that
would be nice to avoid when possible.

The cases I thought exhibited the issue were the two call sites of
dma_set_decrypted(). Playing around with it, I was thinking it might
be easier to just fix those to open code leaking the pages on
dma_set_decrypted() error. In which case it won't have the re-encrypt
problem.

It makes it less foolproof, but more efficient. And
free_decrypted_pages() doesn't fit great anyway, as pointed out by
Christoph.
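A sketch of the open-coded alternative Rick describes, modeled on the
dma_set_decrypted() call sites in kernel/dma/direct.c (the function name is
illustrative, and this is not necessarily the eventual upstream fix):

static void *alloc_and_decrypt(struct device *dev, struct page *page,
			       size_t size)
{
	void *ret = page_address(page);

	if (dma_set_decrypted(dev, ret, size)) {
		/*
		 * The pages may be partially shared with the host and
		 * cannot safely be re-encrypted or freed, so
		 * deliberately leak them.
		 */
		return NULL;
	}
	return ret;
}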
On 2023-10-23 17:46, Edgecombe, Rick P wrote:
> On Wed, 2023-10-18 at 18:42 +0100, Robin Murphy wrote:
> > On 2023-10-17 21:25, Rick Edgecombe wrote:
> > > On TDX it is possible for the untrusted host to cause
> > > set_memory_encrypted() or set_memory_decrypted() to fail such
> > > that an error is returned and the resulting memory is shared.
> > > Callers need to take care to handle these errors to avoid
> > > returning decrypted (shared) memory to the page allocator, which
> > > could lead to functional or security issues.
> > >
> > > DMA could free decrypted/shared pages if set_memory_decrypted()
> > > fails. Use the recently added free_decrypted_pages() to avoid
> > > this.
> > >
> > > Several paths also result in properly encrypted pages being
> > > freed through the same freeing function. Rely on
> > > free_decrypted_pages() to not leak the memory in these cases.
> >
> > If something's needed in the fallback path here, what about the
> > cma_release() paths?
>
> You mean inside cma_release()? If so, unfortunately I think it won't
> fit great because there are callers that are never dealing with
> shared memory (hugetlb). The reset-to-private operation does extra
> work that would be nice to avoid when possible.
>
> The cases I thought exhibited the issue were the two call sites of
> dma_set_decrypted(). Playing around with it, I was thinking it might
> be easier to just fix those to open code leaking the pages on
> dma_set_decrypted() error. In which case it won't have the re-encrypt
> problem.
>
> It makes it less foolproof, but more efficient. And
> free_decrypted_pages() doesn't fit great anyway, as pointed out by
> Christoph.

My point is that in dma_direct_alloc(), we get some memory either
straight from the page allocator *or* from a CMA area, then call
set_memory_decrypted() on it. If the problem is that
set_memory_decrypted() can fail and require cleanup, then logically if
that cleanup is necessary for the
dma_free_contiguous()->__free_pages() call, then surely it must also
be necessary for the
dma_free_contiguous()->cma_release()->free_contig_range()->__free_page()
calls.

Thanks,
Robin.
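Robin's point in code form: a condensed paraphrase of dma_free_contiguous()
from kernel/dma/contiguous.c (per-NUMA CMA handling omitted), showing that
memory can leave through cma_release() as well as __free_pages(), so
handling only the buddy path is incomplete:

void dma_free_contiguous(struct device *dev, struct page *page, size_t size)
{
	unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;

	/* If the device has its own CMA area, the page goes back there. */
	if (dev->cma_area) {
		if (cma_release(dev->cma_area, page, count))
			return;
	} else {
		/* Otherwise it may belong to the default CMA area. */
		if (cma_release(dma_contiguous_default_area, page, count))
			return;
	}

	/* Not in any CMA: free to the buddy allocator, the only path
	 * the patch covers. */
	__free_pages(page, get_order(size));
}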
On Mon, 2023-10-23 at 18:22 +0100, Robin Murphy wrote:
> > > If something's needed in the fallback path here, what about the
> > > cma_release() paths?
> >
> > You mean inside cma_release()? If so, unfortunately I think it
> > won't fit great because there are callers that are never dealing
> > with shared memory (hugetlb). The reset-to-private operation does
> > extra work that would be nice to avoid when possible.
> >
> > The cases I thought exhibited the issue were the two call sites of
> > dma_set_decrypted(). Playing around with it, I was thinking it
> > might be easier to just fix those to open code leaking the pages
> > on dma_set_decrypted() error. In which case it won't have the
> > re-encrypt problem.
> >
> > It makes it less foolproof, but more efficient. And
> > free_decrypted_pages() doesn't fit great anyway, as pointed out by
> > Christoph.
>
> My point is that in dma_direct_alloc(), we get some memory either
> straight from the page allocator *or* from a CMA area, then call
> set_memory_decrypted() on it. If the problem is that
> set_memory_decrypted() can fail and require cleanup, then logically
> if that cleanup is necessary for the
> dma_free_contiguous()->__free_pages() call, then surely it must also
> be necessary for the
> dma_free_contiguous()->cma_release()->free_contig_range()->__free_page()
> calls.

Oh, I see, you are saying the patch misses that case. Yes, makes
sense. Sorry for the confusion. In trying to fix the callers, I waded
through a lot of areas that I didn't have much expertise in, and
probably should have marked the whole thing RFC.
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index f2fc203fb8a1..b0800cbbc357 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -9,6 +9,7 @@
 #include <linux/dma-mapping.h>
 #include <linux/pgtable.h>
 #include <linux/slab.h>
+#include <linux/set_memory.h>
 
 struct cma;
 
@@ -165,7 +166,7 @@ static inline struct page *dma_alloc_contiguous(struct device *dev, size_t size,
 static inline void dma_free_contiguous(struct device *dev, struct page *page,
 		size_t size)
 {
-	__free_pages(page, get_order(size));
+	free_decrypted_pages((unsigned long)page_address(page), get_order(size));
 }
 #endif /* CONFIG_DMA_CMA*/
diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
index f005c66f378c..e962f1f6434e 100644
--- a/kernel/dma/contiguous.c
+++ b/kernel/dma/contiguous.c
@@ -429,7 +429,7 @@ void dma_free_contiguous(struct device *dev, struct page *page, size_t size)
 	}
 
 	/* not in any cma, free from buddy */
-	__free_pages(page, get_order(size));
+	free_decrypted_pages((unsigned long)page_address(page), get_order(size));
 }
 
 /*