Message ID | 20221111183532.3676646-2-kristen@linux.intel.com |
---|---|
State | New |
Headers |
From: Kristen Carlson Accardi <kristen@linux.intel.com>
To: jarkko@kernel.org, dave.hansen@linux.kernel.org, tj@kernel.org, linux-kernel@vger.kernel.org, linux-sgx@vger.kernel.org, cgroups@vger.kernel.org, Dave Hansen <dave.hansen@linux.intel.com>, Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>
Cc: zhiquan1.li@intel.com, Kristen Carlson Accardi <kristen@linux.intel.com>, Sean Christopherson <seanjc@google.com>
Subject: [PATCH 01/26] x86/sgx: Call cond_resched() at the end of sgx_reclaim_pages()
Date: Fri, 11 Nov 2022 10:35:06 -0800
Message-Id: <20221111183532.3676646-2-kristen@linux.intel.com>
In-Reply-To: <20221111183532.3676646-1-kristen@linux.intel.com>
References: <20221111183532.3676646-1-kristen@linux.intel.com>
Series | Add Cgroup support for SGX EPC memory |
Commit Message
Kristen Carlson Accardi
Nov. 11, 2022, 6:35 p.m. UTC
From: Sean Christopherson <sean.j.christopherson@intel.com>

In order to avoid repetition of cond_resched() in ksgxd() and sgx_alloc_epc_page(), move the invocation of post-reclaim cond_resched() inside sgx_reclaim_pages(). Except in the case of sgx_reclaim_direct(), sgx_reclaim_pages() is always called in a loop and is always followed by a call to cond_resched(). This will hold true for the EPC cgroup as well, which adds even more calls to sgx_reclaim_pages() and thus cond_resched().

Calls to sgx_reclaim_direct() may be performance sensitive. Allow sgx_reclaim_direct() to avoid the cond_resched() call by moving the original sgx_reclaim_pages() call to __sgx_reclaim_pages() and then have sgx_reclaim_pages() become a wrapper around that call with a cond_resched().

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Kristen Carlson Accardi <kristen@linux.intel.com>
Cc: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kernel/cpu/sgx/main.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)
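At a glance, the patch keeps the reclaim logic in __sgx_reclaim_pages() and makes sgx_reclaim_pages() a thin wrapper that owns the scheduling point. A condensed sketch of the resulting call structure, derived from the diff below with the reclaim body elided, looks like this:

```c
/*
 * Condensed sketch only; the actual reclaim body, locking and helpers
 * are elided (see the full diff further down this page).
 */
static void __sgx_reclaim_pages(void)
{
        /* ... scan, age and write back up to SGX_NR_TO_SCAN EPC pages ... */
}

/* Looping callers (ksgxd(), sgx_alloc_epc_page()) use the wrapper... */
static void sgx_reclaim_pages(void)
{
        __sgx_reclaim_pages();
        cond_resched();         /* one scheduling point per reclaim batch */
}

/* ...while the performance-sensitive direct-reclaim path skips the resched. */
void sgx_reclaim_direct(void)
{
        if (sgx_should_reclaim(SGX_NR_LOW_PAGES))
                __sgx_reclaim_pages();
}
```

The looping callers then drop their own cond_resched() calls, so the scheduling point lives in exactly one place instead of being repeated at every call site.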
Comments
On Fri, Nov 11, 2022 at 10:35:06AM -0800, Kristen Carlson Accardi wrote:
> From: Sean Christopherson <sean.j.christopherson@intel.com>
>
> In order to avoid repetition of cond_resched() in ksgxd() and
> sgx_alloc_epc_page(), move the invocation of post-reclaim cond_resched()
> inside sgx_reclaim_pages(). Except in the case of sgx_reclaim_direct(),
> sgx_reclaim_pages() is always called in a loop and is always followed
> by a call to cond_resched(). This will hold true for the EPC cgroup
> as well, which adds even more calls to sgx_reclaim_pages() and thus
> cond_resched(). Calls to sgx_reclaim_direct() may be performance
> sensitive. Allow sgx_reclaim_direct() to avoid the cond_resched()
> call by moving the original sgx_reclaim_pages() call to
> __sgx_reclaim_pages() and then have sgx_reclaim_pages() become a
> wrapper around that call with a cond_resched().
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> Signed-off-by: Kristen Carlson Accardi <kristen@linux.intel.com>
> Cc: Sean Christopherson <seanjc@google.com>
> ---
>  arch/x86/kernel/cpu/sgx/main.c | 17 +++++++++++------
>  1 file changed, 11 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
> index 160c8dbee0ab..ffce6fc70a1f 100644
> --- a/arch/x86/kernel/cpu/sgx/main.c
> +++ b/arch/x86/kernel/cpu/sgx/main.c
> @@ -287,7 +287,7 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page,
>   * problematic as it would increase the lock contention too much, which would
>   * halt forward progress.
>   */
> -static void sgx_reclaim_pages(void)
> +static void __sgx_reclaim_pages(void)
>  {
>         struct sgx_epc_page *chunk[SGX_NR_TO_SCAN];
>         struct sgx_backing backing[SGX_NR_TO_SCAN];
> @@ -369,6 +369,12 @@ static void sgx_reclaim_pages(void)
>         }
>  }
>
> +static void sgx_reclaim_pages(void)
> +{
> +       __sgx_reclaim_pages();
> +       cond_resched();
> +}
> +
>  static bool sgx_should_reclaim(unsigned long watermark)
>  {
>         return atomic_long_read(&sgx_nr_free_pages) < watermark &&
> @@ -378,12 +384,14 @@ static bool sgx_should_reclaim(unsigned long watermark)
>  /*
>   * sgx_reclaim_direct() should be called (without enclave's mutex held)
>   * in locations where SGX memory resources might be low and might be
> - * needed in order to make forward progress.
> + * needed in order to make forward progress. This call to
> + * __sgx_reclaim_pages() avoids the cond_resched() in sgx_reclaim_pages()
> + * to improve performance.
>   */
>  void sgx_reclaim_direct(void)
>  {
>         if (sgx_should_reclaim(SGX_NR_LOW_PAGES))
> -               sgx_reclaim_pages();
> +               __sgx_reclaim_pages();

Is it a big deal to have "extra" cond_resched?

>  }
>
>  static int ksgxd(void *p)
> @@ -410,8 +418,6 @@ static int ksgxd(void *p)
>
>                 if (sgx_should_reclaim(SGX_NR_HIGH_PAGES))
>                         sgx_reclaim_pages();
> -
> -               cond_resched();
>         }
>
>         return 0;
> @@ -582,7 +588,6 @@ struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim)
>                 }
>
>                 sgx_reclaim_pages();
> -               cond_resched();
>         }
>
>         if (sgx_should_reclaim(SGX_NR_LOW_PAGES))
> --
> 2.37.3
>

BR, Jarkko
Hi Jarkko,

On 11/15/2022 3:27 PM, Jarkko Sakkinen wrote:
> On Fri, Nov 11, 2022 at 10:35:06AM -0800, Kristen Carlson Accardi wrote:
>> From: Sean Christopherson <sean.j.christopherson@intel.com>
>>
>> In order to avoid repetition of cond_resched() in ksgxd() and
>> sgx_alloc_epc_page(), move the invocation of post-reclaim cond_resched()
>> inside sgx_reclaim_pages(). Except in the case of sgx_reclaim_direct(),
>> sgx_reclaim_pages() is always called in a loop and is always followed
>> by a call to cond_resched(). This will hold true for the EPC cgroup
>> as well, which adds even more calls to sgx_reclaim_pages() and thus
>> cond_resched(). Calls to sgx_reclaim_direct() may be performance
>> sensitive. Allow sgx_reclaim_direct() to avoid the cond_resched()
>> call by moving the original sgx_reclaim_pages() call to
>> __sgx_reclaim_pages() and then have sgx_reclaim_pages() become a
>> wrapper around that call with a cond_resched().
>>
>> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
>> Signed-off-by: Kristen Carlson Accardi <kristen@linux.intel.com>
>> Cc: Sean Christopherson <seanjc@google.com>
>> ---
>>  arch/x86/kernel/cpu/sgx/main.c | 17 +++++++++++------
>>  1 file changed, 11 insertions(+), 6 deletions(-)
>>
>> diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
>> index 160c8dbee0ab..ffce6fc70a1f 100644
>> --- a/arch/x86/kernel/cpu/sgx/main.c
>> +++ b/arch/x86/kernel/cpu/sgx/main.c
>> @@ -287,7 +287,7 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page,
>>   * problematic as it would increase the lock contention too much, which would
>>   * halt forward progress.
>>   */
>> -static void sgx_reclaim_pages(void)
>> +static void __sgx_reclaim_pages(void)
>>  {
>>         struct sgx_epc_page *chunk[SGX_NR_TO_SCAN];
>>         struct sgx_backing backing[SGX_NR_TO_SCAN];
>> @@ -369,6 +369,12 @@ static void sgx_reclaim_pages(void)
>>         }
>>  }
>>
>> +static void sgx_reclaim_pages(void)
>> +{
>> +       __sgx_reclaim_pages();
>> +       cond_resched();
>> +}
>> +
>>  static bool sgx_should_reclaim(unsigned long watermark)
>>  {
>>         return atomic_long_read(&sgx_nr_free_pages) < watermark &&
>> @@ -378,12 +384,14 @@ static bool sgx_should_reclaim(unsigned long watermark)
>>  /*
>>   * sgx_reclaim_direct() should be called (without enclave's mutex held)
>>   * in locations where SGX memory resources might be low and might be
>> - * needed in order to make forward progress.
>> + * needed in order to make forward progress. This call to
>> + * __sgx_reclaim_pages() avoids the cond_resched() in sgx_reclaim_pages()
>> + * to improve performance.
>>   */
>>  void sgx_reclaim_direct(void)
>>  {
>>         if (sgx_should_reclaim(SGX_NR_LOW_PAGES))
>> -               sgx_reclaim_pages();
>> +               __sgx_reclaim_pages();
>
> Is it a big deal to have "extra" cond_resched?
>

sgx_reclaim_direct() is used to ensure that there is enough SGX memory available to make forward progress within a loop that may span a large range of pages. sgx_reclaim_direct() ensures that there is enough memory right before the caller depends on that available memory.

I think that giving other tasks an opportunity to run in the middle is risky, since these other tasks may end up consuming the SGX memory that was just freed up, which increases the likelihood that the operation fails and the user gets an EAGAIN error.

Additionally, in a constrained environment where sgx_reclaim_direct() is needed to reclaim pages, an additional cond_resched() could cause a user-visible slowdown when operating on a large memory range.

Reinette
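To make the concern concrete, a purely illustrative sketch of the kind of caller Reinette describes is shown below. The names sgx_modify_range() and sgx_modify_one_page() are hypothetical and invented for this example; they are not functions in the kernel sources.

```c
/*
 * Purely illustrative: a caller that walks a large range and relies on
 * sgx_reclaim_direct() to keep EPC pages available for the next step.
 * sgx_modify_range() and sgx_modify_one_page() are hypothetical names.
 */
static long sgx_modify_range(unsigned long start, unsigned long length)
{
        unsigned long addr;
        long ret;

        for (addr = start; addr < start + length; addr += PAGE_SIZE) {
                /*
                 * Reclaim right before the EPC is needed.  An extra
                 * cond_resched() at this point would run on every iteration
                 * and would give other tasks a window to consume the pages
                 * that were just freed, making a failure (-EAGAIN) more
                 * likely and adding latency across a long loop.
                 */
                sgx_reclaim_direct();

                ret = sgx_modify_one_page(addr);        /* hypothetical helper */
                if (ret)
                        return ret;
        }

        return 0;
}
```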
diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 160c8dbee0ab..ffce6fc70a1f 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -287,7 +287,7 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page,
  * problematic as it would increase the lock contention too much, which would
  * halt forward progress.
  */
-static void sgx_reclaim_pages(void)
+static void __sgx_reclaim_pages(void)
 {
        struct sgx_epc_page *chunk[SGX_NR_TO_SCAN];
        struct sgx_backing backing[SGX_NR_TO_SCAN];
@@ -369,6 +369,12 @@ static void sgx_reclaim_pages(void)
        }
 }
 
+static void sgx_reclaim_pages(void)
+{
+       __sgx_reclaim_pages();
+       cond_resched();
+}
+
 static bool sgx_should_reclaim(unsigned long watermark)
 {
        return atomic_long_read(&sgx_nr_free_pages) < watermark &&
@@ -378,12 +384,14 @@ static bool sgx_should_reclaim(unsigned long watermark)
 /*
  * sgx_reclaim_direct() should be called (without enclave's mutex held)
  * in locations where SGX memory resources might be low and might be
- * needed in order to make forward progress.
+ * needed in order to make forward progress. This call to
+ * __sgx_reclaim_pages() avoids the cond_resched() in sgx_reclaim_pages()
+ * to improve performance.
  */
 void sgx_reclaim_direct(void)
 {
        if (sgx_should_reclaim(SGX_NR_LOW_PAGES))
-               sgx_reclaim_pages();
+               __sgx_reclaim_pages();
 }
 
 static int ksgxd(void *p)
@@ -410,8 +418,6 @@ static int ksgxd(void *p)
 
                if (sgx_should_reclaim(SGX_NR_HIGH_PAGES))
                        sgx_reclaim_pages();
-
-               cond_resched();
        }
 
        return 0;
@@ -582,7 +588,6 @@ struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim)
                }
 
                sgx_reclaim_pages();
-               cond_resched();
        }
 
        if (sgx_should_reclaim(SGX_NR_LOW_PAGES))