Message ID: 06a5f478d3bfaa57954954c82dd5d4040450171d.1666130846.git.reinette.chatre@intel.com
State: New
Series: x86/sgx: Reduce delay and interference of enclave release
Commit Message
Reinette Chatre
Oct. 18, 2022, 10:42 p.m. UTC
commit 8795359e35bc ("x86/sgx: Silence softlockup detection when releasing large enclaves") introduced a cond_resched() during enclave release where the EREMOVE instruction is applied to every 4k enclave page. Giving other tasks an opportunity to run while tearing down a large enclave placates the soft lockup detector, but Iqbal found that the fix causes a 25% performance degradation of a workload run using Gramine.

Gramine maintains a 1:1 mapping between processes and SGX enclaves. That means if a workload in an enclave creates a subprocess then Gramine creates a duplicate enclave for that subprocess to run in. The consequence is that the release of the enclave used to run the subprocess can impact the performance of the workload that is run in the original enclave, especially with large enclaves when SGX2 is not in use.

The workload run by Iqbal behaves as follows:

    Create enclave (enclave "A")
    /* Initialize workload in enclave "A" */
    Create enclave (enclave "B")
    /* Run subprocess in enclave "B" and send result to enclave "A" */
    Release enclave (enclave "B")
    /* Run workload in enclave "A" */
    Release enclave (enclave "A")

The performance impact of releasing enclave "B" in the above scenario is amplified when there is a lot of SGX memory and the enclave size matches the SGX memory. With 128GB of SGX memory and an enclave size of 128GB, from the time enclave "B" starts the 128GB of SGX memory is oversubscribed with a combined demand for 256GB from the two enclaves.

Before commit 8795359e35bc ("x86/sgx: Silence softlockup detection when releasing large enclaves") enclave release was done in a tight loop without giving other tasks a chance to run. Even though the system experienced soft lockups, the workload (run in enclave "A") obtained good performance numbers because when the workload started running there was no interference.

Commit 8795359e35bc ("x86/sgx: Silence softlockup detection when releasing large enclaves") gave other tasks an opportunity to run while an enclave is released. The impact in this scenario is that while enclave "B" is released and needs to access each of its pages in order to run the SGX EREMOVE instruction on them, enclave "A" is attempting to run the workload and needs to access its own enclave pages. This causes a lot of swapping due to the demand for the oversubscribed SGX memory, and the workload in enclave "A" experiences longer latencies while enclave "B" is released.

Improve the performance of enclave release while still avoiding the soft lockup detector with two enhancements:
- Only call cond_resched() after XA_CHECK_SCHED iterations.
- Use the xarray advanced API to keep the xarray locked for XA_CHECK_SCHED iterations instead of locking and unlocking at every iteration.

This batching solution is copied from sgx_encl_may_map(), which also iterates through all enclave pages using this technique.

With this enhancement the workload experiences a 5% performance degradation when compared to a kernel without commit 8795359e35bc ("x86/sgx: Silence softlockup detection when releasing large enclaves"), an improvement over the reported 25% degradation, while still placating the soft lockup detector.

Scenarios with poor performance are still possible even with these enhancements, for example short workloads that create subprocesses while running in large enclaves. Further performance improvements are pursued in user space by avoiding the creation of duplicate enclaves for certain subprocesses, and by using SGX2, which lazily allocates pages as needed so that enclaves created for subprocesses start quickly and release quickly.

Fixes: 8795359e35bc ("x86/sgx: Silence softlockup detection when releasing large enclaves")
Reported-by: Md Iqbal Hossain <md.iqbal.hossain@intel.com>
Tested-by: Md Iqbal Hossain <md.iqbal.hossain@intel.com>
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
I do not know if this qualifies as stable material.

 arch/x86/kernel/cpu/sgx/encl.c | 22 ++++++++++++++++++----
 1 file changed, 18 insertions(+), 4 deletions(-)
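For readers who have not used the xarray advanced API, the sketch below shows the batching pattern the patch borrows from sgx_encl_may_map(): hold the xas lock across a batch of entries, then pause the cursor, drop the lock and reschedule. This is a minimal, hypothetical sketch, not the kernel's code: release_all_entries() and its kfree()-based teardown are illustrative, while the xas_*() helpers, cond_resched() and XA_CHECK_SCHED are real kernel APIs.

#include <linux/xarray.h>
#include <linux/sched.h>
#include <linux/slab.h>

/*
 * Minimal sketch of the batched teardown pattern described above.
 * XA_CHECK_SCHED (4096) comes from <linux/xarray.h> and is the same
 * batch size sgx_encl_may_map() uses.
 */
static void release_all_entries(struct xarray *xa, unsigned long first,
				unsigned long last)
{
	XA_STATE(xas, xa, first);	/* advanced-API cursor starting at index 'first' */
	unsigned long count = 0;
	void *entry;

	xas_lock(&xas);
	xas_for_each(&xas, entry, last) {
		kfree(entry);		/* per-entry teardown, done under the xa lock */

		/* Drop the lock and reschedule once per XA_CHECK_SCHED entries. */
		if (!(++count % XA_CHECK_SCHED)) {
			xas_pause(&xas);	/* make it safe to drop the lock mid-walk */
			xas_unlock(&xas);

			cond_resched();

			xas_lock(&xas);
		}
	}
	xas_unlock(&xas);

	/*
	 * Safe only because nothing else can look up this xarray anymore:
	 * the stale pointers are dropped wholesale, as the patch does with
	 * xa_destroy(&encl->page_array).
	 */
	xa_destroy(xa);
}

The point of xas_pause() is that the cursor stops referencing the xarray's internal nodes, so the lock can be dropped and the walk resumed from the next index without restarting from the beginning.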
Comments
On Tue, Oct 18, 2022 at 03:42:47PM -0700, Reinette Chatre wrote:
> commit 8795359e35bc ("x86/sgx: Silence softlockup detection when
> releasing large enclaves") introduced a cond_resched() during enclave
> release where the EREMOVE instruction is applied to every 4k enclave
> page. Giving other tasks an opportunity to run while tearing down a
> large enclave placates the soft lockup detector but Iqbal found
> that the fix causes a 25% performance degradation of a workload
> run using Gramine.
>
> Gramine maintains a 1:1 mapping between processes and SGX enclaves.
> That means if a workload in an enclave creates a subprocess then
> Gramine creates a duplicate enclave for that subprocess to run in.
> The consequence is that the release of the enclave used to run
> the subprocess can impact the performance of the workload that is
> run in the original enclave, especially in large enclaves when
> SGX2 is not in use.
>
> The workload run by Iqbal behaves as follows:
>    Create enclave (enclave "A")
>    /* Initialize workload in enclave "A" */
>    Create enclave (enclave "B")
>    /* Run subprocess in enclave "B" and send result to enclave "A" */
>    Release enclave (enclave "B")
>    /* Run workload in enclave "A" */
>    Release enclave (enclave "A")
>
> The performance impact of releasing enclave "B" in the above scenario
> is amplified when there is a lot of SGX memory and the enclave size
> matches the SGX memory. When there is 128GB SGX memory and an enclave
> size of 128GB, from the time enclave "B" starts the 128GB SGX memory
> is oversubscribed with a combined demand for 256GB from the two
> enclaves.
>
> Before commit 8795359e35bc ("x86/sgx: Silence softlockup detection when
> releasing large enclaves") enclave release was done in a tight loop
> without giving other tasks a chance to run. Even though the system
> experienced soft lockups the workload (run in enclave "A") obtained
> good performance numbers because when the workload started running
> there was no interference.
>
> Commit 8795359e35bc ("x86/sgx: Silence softlockup detection when
> releasing large enclaves") gave other tasks opportunity to run while an
> enclave is released. The impact of this in this scenario is that while
> enclave "B" is released and needing to access each page that belongs
> to it in order to run the SGX EREMOVE instruction on it, enclave "A"
> is attempting to run the workload needing to access the enclave
> pages that belong to it. This causes a lot of swapping due to the
> demand for the oversubscribed SGX memory. Longer latencies are
> experienced by the workload in enclave "A" while enclave "B" is
> released.
>
> Improve the performance of enclave release while still avoiding the
> soft lockup detector with two enhancements:
> - Only call cond_resched() after XA_CHECK_SCHED iterations.
> - Use the xarray advanced API to keep the xarray locked for
>   XA_CHECK_SCHED iterations instead of locking and unlocking
>   at every iteration.
>
> This batching solution is copied from sgx_encl_may_map() that
> also iterates through all enclave pages using this technique.
>
> With this enhancement the workload experiences a 5%
> performance degradation when compared to a kernel without
> commit 8795359e35bc ("x86/sgx: Silence softlockup detection when
> releasing large enclaves"), an improvement to the reported 25%
> degradation, while still placating the soft lockup detector.
>
> Scenarios with poor performance are still possible even with these
> enhancements. For example, short workloads creating sub processes
> while running in large enclaves. Further performance improvements
> are pursued in user space through avoiding to create duplicate enclaves
> for certain sub processes, and using SGX2 that will do lazy allocation
> of pages as needed so enclaves created for sub processes start quickly
> and release quickly.
>
> Fixes: 8795359e35bc ("x86/sgx: Silence softlockup detection when releasing large enclaves")
> Reported-by: Md Iqbal Hossain <md.iqbal.hossain@intel.com>
> Tested-by: Md Iqbal Hossain <md.iqbal.hossain@intel.com>
> Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
> ---
>
> I do not know if this qualifies as stable material.
>
>  arch/x86/kernel/cpu/sgx/encl.c | 22 ++++++++++++++++++----
>  1 file changed, 18 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
> index 1ec20807de1e..f7365c278525 100644
> --- a/arch/x86/kernel/cpu/sgx/encl.c
> +++ b/arch/x86/kernel/cpu/sgx/encl.c
> @@ -682,9 +682,12 @@ void sgx_encl_release(struct kref *ref)
>  	struct sgx_encl *encl = container_of(ref, struct sgx_encl, refcount);
>  	struct sgx_va_page *va_page;
>  	struct sgx_encl_page *entry;
> -	unsigned long index;
> +	unsigned long count = 0;
> +
> +	XA_STATE(xas, &encl->page_array, PFN_DOWN(encl->base));
>
> -	xa_for_each(&encl->page_array, index, entry) {
> +	xas_lock(&xas);
> +	xas_for_each(&xas, entry, PFN_DOWN(encl->base + encl->size - 1)) {

I would add to declarations:

unsigned long nr_pages = PFN_DOWN(encl->base + encl->size - 1);

Makes this more readable.

>  		if (entry->epc_page) {
>  			/*
>  			 * The page and its radix tree entry cannot be freed
> @@ -699,9 +702,20 @@ void sgx_encl_release(struct kref *ref)
>  		}
>
>  		kfree(entry);
> -		/* Invoke scheduler to prevent soft lockups. */
> -		cond_resched();
> +		/*
> +		 * Invoke scheduler on every XA_CHECK_SCHED iteration
> +		 * to prevent soft lockups.
> +		 */
> +		if (!(++count % XA_CHECK_SCHED)) {
> +			xas_pause(&xas);
> +			xas_unlock(&xas);
> +
> +			cond_resched();
> +
> +			xas_lock(&xas);
> +		}
>  	}

WARN_ON(count != nr_pages);

> +	xas_unlock(&xas);
>
>  	xa_destroy(&encl->page_array);
>
> --
> 2.34.1
>

BR, Jarkko
Hi Jarkko,

On 10/23/2022 1:06 PM, Jarkko Sakkinen wrote:
> On Tue, Oct 18, 2022 at 03:42:47PM -0700, Reinette Chatre wrote:

...

>> diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
>> index 1ec20807de1e..f7365c278525 100644
>> --- a/arch/x86/kernel/cpu/sgx/encl.c
>> +++ b/arch/x86/kernel/cpu/sgx/encl.c
>> @@ -682,9 +682,12 @@ void sgx_encl_release(struct kref *ref)
>>  	struct sgx_encl *encl = container_of(ref, struct sgx_encl, refcount);
>>  	struct sgx_va_page *va_page;
>>  	struct sgx_encl_page *entry;
>> -	unsigned long index;
>> +	unsigned long count = 0;
>> +
>> +	XA_STATE(xas, &encl->page_array, PFN_DOWN(encl->base));
>>
>> -	xa_for_each(&encl->page_array, index, entry) {
>> +	xas_lock(&xas);
>> +	xas_for_each(&xas, entry, PFN_DOWN(encl->base + encl->size - 1)) {
>
> I would add to declarations:
>
> unsigned long nr_pages = PFN_DOWN(encl->base + encl->size - 1);
>
> Makes this more readable.

Will do, but I prefer to name it "max_page_index" or something related instead.
"nr_pages" implies "number of pages" to me, which is not what
PFN_DOWN(encl->base + encl->size - 1) represents. What is represented is the
highest possible index of a page in page_array, where an index is the
pfn of a page.

>
>>  		if (entry->epc_page) {
>>  			/*
>>  			 * The page and its radix tree entry cannot be freed
>> @@ -699,9 +702,20 @@ void sgx_encl_release(struct kref *ref)
>>  		}
>>
>>  		kfree(entry);
>> -		/* Invoke scheduler to prevent soft lockups. */
>> -		cond_resched();
>> +		/*
>> +		 * Invoke scheduler on every XA_CHECK_SCHED iteration
>> +		 * to prevent soft lockups.
>> +		 */
>> +		if (!(++count % XA_CHECK_SCHED)) {
>> +			xas_pause(&xas);
>> +			xas_unlock(&xas);
>> +
>> +			cond_resched();
>> +
>> +			xas_lock(&xas);
>> +		}
>>  	}
>
> WARN_ON(count != nr_pages);
>

nr_pages as assigned in your example does not represent a count of the
enclave pages but instead a pfn index into the page_array. Comparing it
to count, the number of removed enclave pages that are not being held
by reclaimer, is not appropriate.

This check would be problematic even if we create a "nr_pages" from
the range of possible indices. This is because of how enclave sizes are
required to be power-of-two that makes it likely for there to be indices
without pages associated with it.

>> +	xas_unlock(&xas);
>>
>>  	xa_destroy(&encl->page_array);
>>
>> --
>> 2.34.1
>>

Reinette
On Mon, Oct 24, 2022 at 11:56:39AM -0700, Reinette Chatre wrote:
> Hi Jarkko,
>
> On 10/23/2022 1:06 PM, Jarkko Sakkinen wrote:
> > On Tue, Oct 18, 2022 at 03:42:47PM -0700, Reinette Chatre wrote:
>
> ...
>
> >> diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
> >> index 1ec20807de1e..f7365c278525 100644
> >> --- a/arch/x86/kernel/cpu/sgx/encl.c
> >> +++ b/arch/x86/kernel/cpu/sgx/encl.c
> >> @@ -682,9 +682,12 @@ void sgx_encl_release(struct kref *ref)
> >>  	struct sgx_encl *encl = container_of(ref, struct sgx_encl, refcount);
> >>  	struct sgx_va_page *va_page;
> >>  	struct sgx_encl_page *entry;
> >> -	unsigned long index;
> >> +	unsigned long count = 0;
> >> +
> >> +	XA_STATE(xas, &encl->page_array, PFN_DOWN(encl->base));
> >>
> >> -	xa_for_each(&encl->page_array, index, entry) {
> >> +	xas_lock(&xas);
> >> +	xas_for_each(&xas, entry, PFN_DOWN(encl->base + encl->size - 1)) {
> >
> > I would add to declarations:
> >
> > unsigned long nr_pages = PFN_DOWN(encl->base + encl->size - 1);
> >
> > Makes this more readable.
>
> Will do, but I prefer to name it "max_page_index" or something related instead.
> "nr_pages" implies "number of pages" to me, which is not what
> PFN_DOWN(encl->base + encl->size - 1) represents. What is represented is the
> highest possible index of a page in page_array, where an index is the
> pfn of a page.

Yeah, makes sense.

> >
> >>  		if (entry->epc_page) {
> >>  			/*
> >>  			 * The page and its radix tree entry cannot be freed
> >> @@ -699,9 +702,20 @@ void sgx_encl_release(struct kref *ref)
> >>  		}
> >>
> >>  		kfree(entry);
> >> -		/* Invoke scheduler to prevent soft lockups. */
> >> -		cond_resched();
> >> +		/*
> >> +		 * Invoke scheduler on every XA_CHECK_SCHED iteration
> >> +		 * to prevent soft lockups.
> >> +		 */
> >> +		if (!(++count % XA_CHECK_SCHED)) {
> >> +			xas_pause(&xas);
> >> +			xas_unlock(&xas);
> >> +
> >> +			cond_resched();
> >> +
> >> +			xas_lock(&xas);
> >> +		}
> >>  	}
> >
> > WARN_ON(count != nr_pages);
> >
>
> nr_pages as assigned in your example does not represent a count of the
> enclave pages but instead a pfn index into the page_array. Comparing it
> to count, the number of removed enclave pages that are not being held
> by reclaimer, is not appropriate.
>
> This check would be problematic even if we create a "nr_pages" from
> the range of possible indices. This is because of how enclave sizes are
> required to be power-of-two that makes it likely for there to be indices
> without pages associated with it.

Ok.

> >> +	xas_unlock(&xas);
> >>
> >>  	xa_destroy(&encl->page_array);
> >>
> >> --
> >> 2.34.1
> >>
>
> Reinette

BR, Jarkko
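To make the index-range versus page-count distinction discussed above concrete, here is a hedged numeric sketch. The base address and the idea of a partially populated enclave are made up for illustration; only PFN_DOWN() (from <linux/pfn.h>) and the 4k page size come from the kernel.

#include <linux/mm.h>
#include <linux/pfn.h>

/*
 * Hypothetical, naturally aligned 128 GiB enclave.  The xarray index of a
 * page is its pfn, so the walk in sgx_encl_release() spans the index range
 * [PFN_DOWN(base), PFN_DOWN(base + size - 1)] regardless of how many pages
 * were actually added to the enclave.
 */
static void sgx_index_range_example(void)
{
	unsigned long base = 0x2000000000UL;			/* illustrative base */
	unsigned long size = 128UL << 30;			/* 128 GiB */

	unsigned long first = PFN_DOWN(base);			/* 0x2000000 */
	unsigned long last  = PFN_DOWN(base + size - 1);	/* 0x3ffffff */
	unsigned long slots = last - first + 1;			/* 2^25 = 33554432 */

	/*
	 * Because enclave sizes must be a power of two, many of these slots
	 * typically hold no page, so the number of entries actually freed
	 * ("count" in the patch) can legitimately be much smaller than
	 * 'slots'; comparing count against the last index would compare a
	 * page count with a pfn.
	 */
	(void)slots;
}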
diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 1ec20807de1e..f7365c278525 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -682,9 +682,12 @@ void sgx_encl_release(struct kref *ref)
 	struct sgx_encl *encl = container_of(ref, struct sgx_encl, refcount);
 	struct sgx_va_page *va_page;
 	struct sgx_encl_page *entry;
-	unsigned long index;
+	unsigned long count = 0;
+
+	XA_STATE(xas, &encl->page_array, PFN_DOWN(encl->base));
 
-	xa_for_each(&encl->page_array, index, entry) {
+	xas_lock(&xas);
+	xas_for_each(&xas, entry, PFN_DOWN(encl->base + encl->size - 1)) {
 		if (entry->epc_page) {
 			/*
 			 * The page and its radix tree entry cannot be freed
@@ -699,9 +702,20 @@ void sgx_encl_release(struct kref *ref)
 		}
 
 		kfree(entry);
-		/* Invoke scheduler to prevent soft lockups. */
-		cond_resched();
+		/*
+		 * Invoke scheduler on every XA_CHECK_SCHED iteration
+		 * to prevent soft lockups.
+		 */
+		if (!(++count % XA_CHECK_SCHED)) {
+			xas_pause(&xas);
+			xas_unlock(&xas);
+
+			cond_resched();
+
+			xas_lock(&xas);
+		}
 	}
+	xas_unlock(&xas);
 
 	xa_destroy(&encl->page_array);