From patchwork Wed Jul 12 23:01:38 2023
X-Patchwork-Submitter: Haitao Huang
X-Patchwork-Id: 119394
From: Haitao Huang
To: jarkko@kernel.org, dave.hansen@linux.intel.com, tj@kernel.org,
    linux-kernel@vger.kernel.org, linux-sgx@vger.kernel.org,
    cgroups@vger.kernel.org, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    x86@kernel.org, "H. Peter Anvin"
Peter Anvin" Cc: kai.huang@intel.com, reinette.chatre@intel.com, Kristen Carlson Accardi , zhiquan1.li@intel.com, seanjc@google.com Subject: [PATCH v3 04/28] x86/sgx: Use sgx_epc_lru_lists for existing active page list Date: Wed, 12 Jul 2023 16:01:38 -0700 Message-Id: <20230712230202.47929-5-haitao.huang@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230712230202.47929-1-haitao.huang@linux.intel.com> References: <20230712230202.47929-1-haitao.huang@linux.intel.com> MIME-Version: 1.0 X-Spam-Status: No, score=-4.3 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, SPF_NONE,T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1771258435880467535 X-GMAIL-MSGID: 1771258435880467535 From: Kristen Carlson Accardi Replace the existing sgx_active_page_list and its spinlock with a global sgx_epc_lru_lists struct. Signed-off-by: Sean Christopherson Signed-off-by: Kristen Carlson Accardi Signed-off-by: Haitao Huang Cc: Sean Christopherson V3: - Remove usage of list wrapper --- arch/x86/kernel/cpu/sgx/main.c | 39 +++++++++++++++++----------------- 1 file changed, 20 insertions(+), 19 deletions(-) diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c index 39939b7496b0..71c3386ccf23 100644 --- a/arch/x86/kernel/cpu/sgx/main.c +++ b/arch/x86/kernel/cpu/sgx/main.c @@ -26,10 +26,9 @@ static DEFINE_XARRAY(sgx_epc_address_space); /* * These variables are part of the state of the reclaimer, and must be accessed - * with sgx_reclaimer_lock acquired. + * with sgx_global_lru.lock acquired. 
 arch/x86/kernel/cpu/sgx/main.c | 39 +++++++++++++++++-----------------
 1 file changed, 20 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 39939b7496b0..71c3386ccf23 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -26,10 +26,9 @@ static DEFINE_XARRAY(sgx_epc_address_space);
 
 /*
  * These variables are part of the state of the reclaimer, and must be accessed
- * with sgx_reclaimer_lock acquired.
+ * with sgx_global_lru.lock acquired.
  */
-static LIST_HEAD(sgx_active_page_list);
-static DEFINE_SPINLOCK(sgx_reclaimer_lock);
+static struct sgx_epc_lru_lists sgx_global_lru;
 
 static atomic_long_t sgx_nr_free_pages = ATOMIC_LONG_INIT(0);
 
@@ -304,13 +303,13 @@ static void sgx_reclaim_pages(void)
 	int ret;
 	int i;
 
-	spin_lock(&sgx_reclaimer_lock);
+	spin_lock(&sgx_global_lru.lock);
 	for (i = 0; i < SGX_NR_TO_SCAN; i++) {
-		if (list_empty(&sgx_active_page_list))
+		epc_page = list_first_entry_or_null(&sgx_global_lru.reclaimable,
+						    struct sgx_epc_page, list);
+		if (!epc_page)
 			break;
 
-		epc_page = list_first_entry(&sgx_active_page_list,
-					    struct sgx_epc_page, list);
 		list_del_init(&epc_page->list);
 		encl_page = epc_page->encl_page;
 
@@ -322,7 +321,7 @@ static void sgx_reclaim_pages(void)
 		 */
 		epc_page->flags &= ~SGX_EPC_PAGE_RECLAIMER_TRACKED;
 	}
-	spin_unlock(&sgx_reclaimer_lock);
+	spin_unlock(&sgx_global_lru.lock);
 
 	for (i = 0; i < cnt; i++) {
 		epc_page = chunk[i];
@@ -345,9 +344,9 @@ static void sgx_reclaim_pages(void)
 		continue;
 
 skip:
-		spin_lock(&sgx_reclaimer_lock);
-		list_add_tail(&epc_page->list, &sgx_active_page_list);
-		spin_unlock(&sgx_reclaimer_lock);
+		spin_lock(&sgx_global_lru.lock);
+		list_add_tail(&epc_page->list, &sgx_global_lru.reclaimable);
+		spin_unlock(&sgx_global_lru.lock);
 
 		kref_put(&encl_page->encl->refcount, sgx_encl_release);
 
@@ -378,7 +377,7 @@ static void sgx_reclaim_pages(void)
 static bool sgx_should_reclaim(unsigned long watermark)
 {
 	return atomic_long_read(&sgx_nr_free_pages) < watermark &&
-	       !list_empty(&sgx_active_page_list);
+	       !list_empty(&sgx_global_lru.reclaimable);
 }
 
 /*
@@ -430,6 +429,8 @@ static bool __init sgx_page_reclaimer_init(void)
 
 	ksgxd_tsk = tsk;
 
+	sgx_lru_init(&sgx_global_lru);
+
 	return true;
 }
 
@@ -505,10 +506,10 @@ struct sgx_epc_page *__sgx_alloc_epc_page(void)
  */
 void sgx_mark_page_reclaimable(struct sgx_epc_page *page)
 {
-	spin_lock(&sgx_reclaimer_lock);
+	spin_lock(&sgx_global_lru.lock);
 	page->flags |= SGX_EPC_PAGE_RECLAIMER_TRACKED;
-	list_add_tail(&page->list, &sgx_active_page_list);
-	spin_unlock(&sgx_reclaimer_lock);
+	list_add_tail(&page->list, &sgx_global_lru.reclaimable);
+	spin_unlock(&sgx_global_lru.lock);
 }
 
 /**
@@ -523,18 +524,18 @@ void sgx_mark_page_reclaimable(struct sgx_epc_page *page)
  */
 int sgx_unmark_page_reclaimable(struct sgx_epc_page *page)
 {
-	spin_lock(&sgx_reclaimer_lock);
+	spin_lock(&sgx_global_lru.lock);
 	if (page->flags & SGX_EPC_PAGE_RECLAIMER_TRACKED) {
 		/* The page is being reclaimed. */
 		if (list_empty(&page->list)) {
-			spin_unlock(&sgx_reclaimer_lock);
+			spin_unlock(&sgx_global_lru.lock);
 			return -EBUSY;
 		}
 
 		list_del(&page->list);
 		page->flags &= ~SGX_EPC_PAGE_RECLAIMER_TRACKED;
 	}
-	spin_unlock(&sgx_reclaimer_lock);
+	spin_unlock(&sgx_global_lru.lock);
 
 	return 0;
 }
@@ -567,7 +568,7 @@ struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim)
 			break;
 		}
 
-		if (list_empty(&sgx_active_page_list))
+		if (list_empty(&sgx_global_lru.reclaimable))
 			return ERR_PTR(-ENOMEM);
 
 		if (!reclaim) {