Message ID | 20231017202505.340906-5-rick.p.edgecombe@intel.com |
---|---|
State | New |
Headers |
From: Rick Edgecombe <rick.p.edgecombe@intel.com>
Subject: [PATCH 04/10] swiotlb: Use free_decrypted_pages()
Date: Tue, 17 Oct 2023 13:24:59 -0700
Message-Id: <20231017202505.340906-5-rick.p.edgecombe@intel.com>
In-Reply-To: <20231017202505.340906-1-rick.p.edgecombe@intel.com>
References: <20231017202505.340906-1-rick.p.edgecombe@intel.com>
|
Series | Handle set_memory_XXcrypted() errors |
Commit Message
Edgecombe, Rick P
Oct. 17, 2023, 8:24 p.m. UTC
On TDX it is possible for the untrusted host to cause
set_memory_encrypted() or set_memory_decrypted() to fail such that an
error is returned and the resulting memory is shared. Callers need to take
care to handle these errors to avoid returning decrypted (shared) memory to
the page allocator, which could lead to functional or security issues.
Swiotlb could free decrypted/shared pages if set_memory_decrypted() fails.
Use the recently added free_decrypted_pages() to avoid this.
In swiotlb_exit(), check for set_memory_encrypted() errors manually,
because the pages are not necessarily going to the page allocator.
Cc: Christoph Hellwig <hch@lst.de>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: iommu@lists.linux.dev
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
kernel/dma/swiotlb.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
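
For readers who did not see the rest of the series: free_decrypted_pages() is added by an earlier patch in this series, not an existing kernel API. The sketch below is only an assumed approximation of its behavior (the real helper may differ in detail); the key property is that pages which may still be shared with the host are leaked rather than returned to the page allocator.

/*
 * Rough sketch only -- the actual helper is introduced earlier in this
 * series. Pages whose conversion back to private fails must never be
 * handed back to the page allocator.
 */
#include <linux/gfp.h>
#include <linux/printk.h>
#include <linux/set_memory.h>

void free_decrypted_pages(unsigned long addr, int order)
{
        int ret = set_memory_encrypted(addr, 1 << order);

        if (ret) {
                /* Conversion back to private failed: leak the pages. */
                pr_warn_ratelimited("failed to re-encrypt memory before freeing, leaking pages\n");
                return;
        }

        free_pages(addr, order);
}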
Comments
On Tue, Oct 17, 2023 at 01:24:59PM -0700, Rick Edgecombe wrote:
> On TDX it is possible for the untrusted host to cause
> set_memory_encrypted() or set_memory_decrypted() to fail such that an
> error is returned and the resulting memory is shared. Callers need to take
> care to handle these errors to avoid returning decrypted (shared) memory to
> the page allocator, which could lead to functional or security issues.
>
> Swiotlb could free decrypted/shared pages if set_memory_decrypted() fails.
> Use the recently added free_decrypted_pages() to avoid this.
>
> In swiotlb_exit(), check for set_memory_encrypted() errors manually,
> because the pages are not nessarily going to the page allocator.

Whatever recently introduced it didn't make it to my mailbox. Please
always CC everyone on every patch in a series, everything else is
impossible to review.
On Wed, 2023-10-18 at 06:43 +0200, Christoph Hellwig wrote:
> On Tue, Oct 17, 2023 at 01:24:59PM -0700, Rick Edgecombe wrote:
> > On TDX it is possible for the untrusted host to cause
> > set_memory_encrypted() or set_memory_decrypted() to fail such that
> > an
> > error is returned and the resulting memory is shared. Callers need
> > to take
> > care to handle these errors to avoid returning decrypted (shared)
> > memory to
> > the page allocator, which could lead to functional or security
> > issues.
> >
> > Swiotlb could free decrypted/shared pages if set_memory_decrypted()
> > fails.
> > Use the recently added free_decrypted_pages() to avoid this.
> >
> > In swiotlb_exit(), check for set_memory_encrypted() errors
> > manually,
> > because the pages are not nessarily going to the page allocator.
>
> Whatever recently introduced it didn't make it to my mailbox. Please
> always CC everyone on every patch in a series, everything else is
> impossible to review.

Ok. The series touches a bunch of set_memory() callers all over, so I
was trying to manage the CC list to something reasonable. I tried to
give a summary in each commit, but I guess it wasn't in depth enough.

Here is the lore link, if you haven't already found it:
https://lore.kernel.org/lkml/20231017202505.340906-1-rick.p.edgecombe@intel.com/
On Tue, 17 Oct 2023 13:24:59 -0700
Rick Edgecombe <rick.p.edgecombe@intel.com> wrote:

> On TDX it is possible for the untrusted host to cause
> set_memory_encrypted() or set_memory_decrypted() to fail such that an
> error is returned and the resulting memory is shared. Callers need to take
> care to handle these errors to avoid returning decrypted (shared) memory to
> the page allocator, which could lead to functional or security issues.
>
> Swiotlb could free decrypted/shared pages if set_memory_decrypted() fails.
> Use the recently added free_decrypted_pages() to avoid this.
>
> In swiotlb_exit(), check for set_memory_encrypted() errors manually,
> because the pages are not nessarily going to the page allocator.
>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: iommu@lists.linux.dev
> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
> ---
>  kernel/dma/swiotlb.c | 11 +++++++----
>  1 file changed, 7 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 394494a6b1f3..ad06786c4f98 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -524,6 +524,7 @@ void __init swiotlb_exit(void)
>          unsigned long tbl_vaddr;
>          size_t tbl_size, slots_size;
>          unsigned int area_order;
> +        int ret;
>
>          if (swiotlb_force_bounce)
>                  return;
> @@ -536,17 +537,19 @@ void __init swiotlb_exit(void)
>          tbl_size = PAGE_ALIGN(mem->end - mem->start);
>          slots_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), mem->nslabs));
>
> -        set_memory_encrypted(tbl_vaddr, tbl_size >> PAGE_SHIFT);
> +        ret = set_memory_encrypted(tbl_vaddr, tbl_size >> PAGE_SHIFT);
>          if (mem->late_alloc) {
>                  area_order = get_order(array_size(sizeof(*mem->areas),
>                          mem->nareas));
>                  free_pages((unsigned long)mem->areas, area_order);
> -                free_pages(tbl_vaddr, get_order(tbl_size));
> +                if (!ret)
> +                        free_pages(tbl_vaddr, get_order(tbl_size));
>                  free_pages((unsigned long)mem->slots, get_order(slots_size));
>          } else {
>                  memblock_free_late(__pa(mem->areas),
>                                     array_size(sizeof(*mem->areas), mem->nareas));
> -                memblock_free_late(mem->start, tbl_size);
> +                if (!ret)
> +                        memblock_free_late(mem->start, tbl_size);
>                  memblock_free_late(__pa(mem->slots), slots_size);
>          }
>
> @@ -581,7 +584,7 @@ static struct page *alloc_dma_pages(gfp_t gfp, size_t bytes)
>                  return page;
>
>  error:
> -        __free_pages(page, order);
> +        free_decrypted_pages((unsigned long)vaddr, order);
>          return NULL;
>  }

I admit I'm not familiar with the encryption/decryption API, but if a
__free_pages() is not sufficient here, then it is quite confusing.
The error label is reached only if set_memory_decrypted() returns
non-zero. My naive expectation is that the memory is *not* decrypted in
that case and does not require special treatment. Is this assumption
wrong?

OTOH I believe there is a bug in the logic. The subsequent
__free_pages() in swiotlb_alloc_tlb() would have to be changed to a
free_decrypted_pages(). However, I'm proposing a different approach to
address the latter issue here:

https://lore.kernel.org/linux-iommu/20231026095123.222-1-petrtesarik@huaweicloud.com/T/

Petr T
On Tue, 2023-10-31 at 11:43 +0100, Petr Tesařík wrote:
>
> I admit I'm not familiar with the encryption/decryption API, but if a
> __free_pages() is not sufficient here, then it is quite confusing.
> The error label is reached only if set_memory_decrypted() returns
> non-zero. My naive expectation is that the memory is *not* decrypted
> in
> that case and does not require special treatment. Is this assumption
> wrong?

Yea, the memory can still be decrypted, or partially decrypted. On x86,
all the set_memory() calls can fail part way through the work, and they
don't rollback the changes they had made up to that point. I was
thinking about trying to change this, but that is the current behavior.

But in TDX's case set_memory_decrypted() can be fully successful and
just return an error. And there aren't any plans to fix the TDX case
(which has special VMM work that the kernel doesn't have control over).
So instead, the plan is to WARN about it and count on the caller to
handle the error properly:
https://lore.kernel.org/lkml/20231027214744.1742056-1-rick.p.edgecombe@intel.com/

>
> OTOH I believe there is a bug in the logic. The subsequent
> __free_pages() in swiotlb_alloc_tlb() would have to be changed to a
> free_decrypted_pages(). However, I'm proposing a different approach
> to
> address the latter issue here:
>
> https://lore.kernel.org/linux-iommu/20231026095123.222-1-petrtesarik@huaweicloud.com/T/

Oh, yes, that makes sense. I was planning to send a patch to just leak
the pages if set_memory_decrypted() fails, after my v2 linked above is
accepted. It could have a different label than the phys_limit check
error path added in your linked patch, so that case would still free
the perfectly fine encrypted pages.
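
To make the caller-side expectation discussed above concrete, the pattern amounts to something like the following illustrative sketch. alloc_shared_buffer() is a made-up name, not an existing kernel function; only set_memory_decrypted(), alloc_pages(), and page_address() are real APIs here.

/*
 * Illustrative sketch only: if set_memory_decrypted() reports failure,
 * the pages may already be (partially) shared with the host, so they
 * must not be returned to the page allocator.
 */
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/set_memory.h>

static void *alloc_shared_buffer(gfp_t gfp, unsigned int order)
{
        struct page *page = alloc_pages(gfp, order);

        if (!page)
                return NULL;

        if (set_memory_decrypted((unsigned long)page_address(page), 1 << order)) {
                /*
                 * Leak the pages instead of freeing them (or hand them
                 * to free_decrypted_pages(), which retries the
                 * conversion back to private before freeing).
                 */
                return NULL;
        }

        return page_address(page);
}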
On Tue, 31 Oct 2023 15:54:52 +0000
"Edgecombe, Rick P" <rick.p.edgecombe@intel.com> wrote:

> On Tue, 2023-10-31 at 11:43 +0100, Petr Tesařík wrote:
> >
> > I admit I'm not familiar with the encryption/decryption API, but if a
> > __free_pages() is not sufficient here, then it is quite confusing.
> > The error label is reached only if set_memory_decrypted() returns
> > non-zero. My naive expectation is that the memory is *not* decrypted
> > in
> > that case and does not require special treatment. Is this assumption
> > wrong?
>
> Yea, the memory can still be decrypted, or partially decrypted. On x86,
> all the set_memory() calls can fail part way through the work, and they
> don't rollback the changes they had made up to that point.

Thank you for the explanation. So, after set_memory_decrypted() fails,
the pages become Schroedinger-crypted, but since its true state cannot
be observed by the guest kernel, it stays as such forever.

Sweet.

> [...]
> > OTOH I believe there is a bug in the logic. The subsequent
> > __free_pages() in swiotlb_alloc_tlb() would have to be changed to a
> > free_decrypted_pages(). However, I'm proposing a different approach
> > to
> > address the latter issue here:
> >
> > https://lore.kernel.org/linux-iommu/20231026095123.222-1-petrtesarik@huaweicloud.com/T/
>
> Oh, yes, that makes sense. I was planning to send a patch to just leak
> the pages if set_memory_decrypted() fails, after my v2 linked above is
> accepted. It could have a different label than the phys_limit check
> error path added in your linked patch, so that case would still free
> the perfectly fine encrypted pages.

Hm, should I incorporate this knowledge into a v2 of my patch and
address both issues?

Petr T
On Tue, 2023-10-31 at 18:13 +0100, Petr Tesařík wrote:
> Thank you for the explanation. So, after set_memory_decrypted()
> fails,
> the pages become Schroedinger-crypted, but since its true state
> cannot
> be observed by the guest kernel, it stays as such forever.
>
> Sweet.
>
Yes... The untrusted host (the part of the VMM TDX is defending
against) gets to specify the return code of these operations (success
or failure). But the coco (a general term for TDX and similar from
other vendors) threat model doesn't include DOS. So the guest should
trust the return code as far as trying to not crash, but not trust it
in regards to the potential to leak data.

It's a bit to ask of the callers, but the other solution we discussed
was to panic the guest if any weirdness is observed by the VMM, in
which case the callers would never see the error. And of course
panicking the kernel is Bad. So that is how we arrived at this request
of the callers. Appreciate the effort to handle it on that side.

> Hm, should I incorporate this knowledge into a v2 of my patch and
> address both issues?

That sounds good to me! Feel free to CC me if you would like, and I can
scrutinize it for this particular issue.
Hi,

On Tue, 31 Oct 2023 17:29:25 +0000
"Edgecombe, Rick P" <rick.p.edgecombe@intel.com> wrote:

> On Tue, 2023-10-31 at 18:13 +0100, Petr Tesařík wrote:
> > Thank you for the explanation. So, after set_memory_decrypted()
> > fails,
> > the pages become Schroedinger-crypted, but since its true state
> > cannot
> > be observed by the guest kernel, it stays as such forever.
> >
> > Sweet.
> >
> Yes... The untrusted host (the part of the VMM TDX is defending
> against) gets to specify the return code of these operations (success
> or failure). But the coco(a general term for TDX and similar from other
> vendors) threat model doesn't include DOS. So the guest should trust
> the return code as far as trying to not crash, but not trust it in
> regards to the potential to leak data.
>
> It's a bit to ask of the callers, but the other solution we discussed
> was to panic the guest if any weirdness is observed by the VMM, in
> which case the callers would never see the error. And of course
> panicing the kernel is Bad. So that is how we arrived at this request
> of the callers. Appreciate the effort to handle it on that side.
>
> > Hm, should I incorporate this knowledge into a v2 of my patch and
> > address both issues?
>
> That sounds good to me! Feel free to CC me if you would like, and I can
> scrutinize it for this particular issue.

I'm sorry I missed that free_decrypted_pages() is added by the very
same series, so I cannot use it just yet. I can open-code it and let
you convert the code to the new function. You may then also want to
convert another open-coded instance further down in swiotlb_free_tlb().

In any case, there is an interdependency between the two patches, so we
should agree in which order to apply them.

Petr T
On Wed, 2023-11-01 at 07:27 +0100, Petr Tesařík wrote:
> I'm sorry I missed that free_decrypted_pages() is added by the very
> same series, so I cannot use it just yet. I can open-code it and let
> you convert the code to the new function. You may then also want to
> convert another open-coded instance further down in
> swiotlb_free_tlb().
>
> In any case, there is an interdependency between the two patches, so
> we
> should agree in which order to apply them.

Open coding in the callers is the current plan (see an explanation
after the "---" in the v1 of that patch[0] if you are interested).
There might not be any interdependency between the warning and swiotlb
changes, but I can double check if you CC me.

[0] https://lore.kernel.org/lkml/20231024234829.1443125-1-rick.p.edgecombe@intel.com/
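
For what it's worth, the "open-coded" handling mentioned above would look roughly like the hypothetical free-path sketch below. This is not the actual swiotlb_free_tlb() implementation (which handles additional cases); it only shows the idea of freeing the buffer solely after a successful conversion back to private.

/* Hypothetical, simplified open-coded variant for a swiotlb free path. */
#include <linux/mm.h>
#include <linux/printk.h>
#include <linux/set_memory.h>

static void swiotlb_free_tlb_sketch(void *vaddr, size_t bytes)
{
        unsigned int order = get_order(bytes);

        /* Only return the pages if they were converted back to private. */
        if (set_memory_encrypted((unsigned long)vaddr, 1 << order)) {
                pr_warn("swiotlb: leaking pages that could not be re-encrypted\n");
                return;
        }

        __free_pages(virt_to_page(vaddr), order);
}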
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 394494a6b1f3..ad06786c4f98 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -524,6 +524,7 @@ void __init swiotlb_exit(void)
         unsigned long tbl_vaddr;
         size_t tbl_size, slots_size;
         unsigned int area_order;
+        int ret;

         if (swiotlb_force_bounce)
                 return;
@@ -536,17 +537,19 @@ void __init swiotlb_exit(void)
         tbl_size = PAGE_ALIGN(mem->end - mem->start);
         slots_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), mem->nslabs));

-        set_memory_encrypted(tbl_vaddr, tbl_size >> PAGE_SHIFT);
+        ret = set_memory_encrypted(tbl_vaddr, tbl_size >> PAGE_SHIFT);
         if (mem->late_alloc) {
                 area_order = get_order(array_size(sizeof(*mem->areas),
                         mem->nareas));
                 free_pages((unsigned long)mem->areas, area_order);
-                free_pages(tbl_vaddr, get_order(tbl_size));
+                if (!ret)
+                        free_pages(tbl_vaddr, get_order(tbl_size));
                 free_pages((unsigned long)mem->slots, get_order(slots_size));
         } else {
                 memblock_free_late(__pa(mem->areas),
                                    array_size(sizeof(*mem->areas), mem->nareas));
-                memblock_free_late(mem->start, tbl_size);
+                if (!ret)
+                        memblock_free_late(mem->start, tbl_size);
                 memblock_free_late(__pa(mem->slots), slots_size);
         }

@@ -581,7 +584,7 @@ static struct page *alloc_dma_pages(gfp_t gfp, size_t bytes)
                 return page;

 error:
-        __free_pages(page, order);
+        free_decrypted_pages((unsigned long)vaddr, order);
         return NULL;
 }