From patchwork Wed Sep 13 04:06:25 2023
X-Patchwork-Submitter: Haitao Huang
X-Patchwork-Id: 138748
From: Haitao Huang
To: jarkko@kernel.org, dave.hansen@linux.intel.com, tj@kernel.org,
    linux-kernel@vger.kernel.org, linux-sgx@vger.kernel.org, x86@kernel.org,
    cgroups@vger.kernel.org, tglx@linutronix.de, mingo@redhat.com,
    bp@alien8.de, hpa@zytor.com, sohil.mehta@intel.com
Cc: zhiquan1.li@intel.com, kristen@linux.intel.com, seanjc@google.com,
    zhanb@microsoft.com, anakrish@microsoft.com,
    mikko.ylinen@linux.intel.com, yangjie@microsoft.com
Subject: [PATCH v4 08/18] x86/sgx: Use a list to track to-be-reclaimed pages
Date: Tue, 12 Sep 2023 21:06:25 -0700
Message-Id: <20230913040635.28815-9-haitao.huang@linux.intel.com>
In-Reply-To: <20230913040635.28815-1-haitao.huang@linux.intel.com>
References: <20230913040635.28815-1-haitao.huang@linux.intel.com>

From: Kristen Carlson Accardi

Change sgx_reclaim_pages() to use a list rather than an array for
storing the epc_pages which will be reclaimed. This change is needed
to transition to the LRU implementation for EPC cgroup support.

When the EPC cgroup is implemented, the reclaiming process will do a
pre-order tree walk for the subtree starting from the limit-violating
cgroup. When each node is visited, candidate pages are selected from
its "reclaimable" LRU list and moved into this temporary list. Passing
a list from node to node for temporary storage in this walk is more
straightforward than using an array.

Signed-off-by: Sean Christopherson
Signed-off-by: Kristen Carlson Accardi
Signed-off-by: Haitao Huang
Cc: Sean Christopherson
---
V4:
- Changes needed for patch reordering
- Revised commit message

V3:
- Removed list wrappers
---
 arch/x86/kernel/cpu/sgx/main.c | 40 +++++++++++++++-------------------
 1 file changed, 18 insertions(+), 22 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index c1ae19a154d0..fba06dc5abfe 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -293,12 +293,11 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page,
  */
 static void sgx_reclaim_pages(void)
 {
-	struct sgx_epc_page *chunk[SGX_NR_TO_SCAN];
 	struct sgx_backing backing[SGX_NR_TO_SCAN];
+	struct sgx_epc_page *epc_page, *tmp;
 	struct sgx_encl_page *encl_page;
-	struct sgx_epc_page *epc_page;
 	pgoff_t page_index;
-	int cnt = 0;
+	LIST_HEAD(iso);
 	int ret;
 	int i;
 
@@ -314,18 +313,22 @@ static void sgx_reclaim_pages(void)
 		if (kref_get_unless_zero(&encl_page->encl->refcount) != 0) {
 			sgx_epc_page_set_state(epc_page,
 					       SGX_EPC_PAGE_RECLAIM_IN_PROGRESS);
-			chunk[cnt++] = epc_page;
+			list_move_tail(&epc_page->list, &iso);
 		} else {
-			/* The owner is freeing the page. No need to add the
-			 * page back to the list of reclaimable pages.
+			/* The owner is freeing the page, remove it from the
+			 * LRU list
 			 */
 			sgx_epc_page_reset_state(epc_page);
+			list_del_init(&epc_page->list);
 		}
 	}
 	spin_unlock(&sgx_global_lru.lock);
 
-	for (i = 0; i < cnt; i++) {
-		epc_page = chunk[i];
+	if (list_empty(&iso))
+		return;
+
+	i = 0;
+	list_for_each_entry_safe(epc_page, tmp, &iso, list) {
 		encl_page = epc_page->owner;
 
 		if (!sgx_reclaimer_age(epc_page))
@@ -340,6 +343,7 @@ static void sgx_reclaim_pages(void)
 			goto skip;
 		}
 
+		i++;
 		encl_page->desc |= SGX_ENCL_PAGE_BEING_RECLAIMED;
 		mutex_unlock(&encl_page->encl->lock);
 		continue;
@@ -347,27 +351,19 @@ static void sgx_reclaim_pages(void)
 skip:
 		spin_lock(&sgx_global_lru.lock);
 		sgx_epc_page_set_state(epc_page, SGX_EPC_PAGE_RECLAIMABLE);
-		list_add_tail(&epc_page->list, &sgx_global_lru.reclaimable);
+		list_move_tail(&epc_page->list, &sgx_global_lru.reclaimable);
 		spin_unlock(&sgx_global_lru.lock);
 
 		kref_put(&encl_page->encl->refcount, sgx_encl_release);
-
-		chunk[i] = NULL;
-	}
-
-	for (i = 0; i < cnt; i++) {
-		epc_page = chunk[i];
-		if (epc_page)
-			sgx_reclaimer_block(epc_page);
 	}
 
-	for (i = 0; i < cnt; i++) {
-		epc_page = chunk[i];
-		if (!epc_page)
-			continue;
+	list_for_each_entry(epc_page, &iso, list)
+		sgx_reclaimer_block(epc_page);
 
+	i = 0;
+	list_for_each_entry_safe(epc_page, tmp, &iso, list) {
 		encl_page = epc_page->owner;
-		sgx_reclaimer_write(epc_page, &backing[i]);
+		sgx_reclaimer_write(epc_page, &backing[i++]);
 
 		kref_put(&encl_page->encl->refcount, sgx_encl_release);
 		sgx_epc_page_reset_state(epc_page);