From patchwork Wed Sep 13 04:06:32 2023
X-Patchwork-Submitter: Haitao Huang
X-Patchwork-Id: 139009
From: Haitao Huang
To: jarkko@kernel.org, dave.hansen@linux.intel.com, tj@kernel.org,
    linux-kernel@vger.kernel.org, linux-sgx@vger.kernel.org, x86@kernel.org,
    cgroups@vger.kernel.org, tglx@linutronix.de, mingo@redhat.com,
    bp@alien8.de, hpa@zytor.com, sohil.mehta@intel.com
Cc: zhiquan1.li@intel.com, kristen@linux.intel.com, seanjc@google.com,
    zhanb@microsoft.com, anakrish@microsoft.com, mikko.ylinen@linux.intel.com,
    yangjie@microsoft.com
Subject: [PATCH v4 15/18] x86/sgx: Prepare for multiple LRUs
Date: Tue, 12 Sep 2023 21:06:32 -0700
Message-Id: <20230913040635.28815-16-haitao.huang@linux.intel.com>
In-Reply-To: <20230913040635.28815-1-haitao.huang@linux.intel.com>
References: <20230913040635.28815-1-haitao.huang@linux.intel.com>

Add sgx_can_reclaim() wrapper and encapsulate direct references to the
global LRU list in the reclaimer functions so that they can be called
with an LRU list per EPC cgroup.

Signed-off-by: Sean Christopherson
Signed-off-by: Kristen Carlson Accardi
Signed-off-by: Haitao Huang
Cc: Sean Christopherson
---
V4:
- Re-organized this patch to include all changes related to
  encapsulation of the global LRU
- Moved this patch to precede the EPC cgroup patch
---
 arch/x86/kernel/cpu/sgx/main.c | 41 +++++++++++++++++++++++-----------
 1 file changed, 28 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index ce316bd5e5bb..3d396fe5ec09 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -34,6 +34,16 @@ static DEFINE_XARRAY(sgx_epc_address_space);
  */
 static struct sgx_epc_lru_lists sgx_global_lru;
 
+static inline struct sgx_epc_lru_lists *sgx_lru_lists(struct sgx_epc_page *epc_page)
+{
+	return &sgx_global_lru;
+}
+
+static inline bool sgx_can_reclaim(void)
+{
+	return !list_empty(&sgx_global_lru.reclaimable);
+}
+
 static atomic_long_t sgx_nr_free_pages = ATOMIC_LONG_INIT(0);
 
 /* Nodes with one or more EPC sections. */
@@ -339,6 +349,7 @@ size_t sgx_reclaim_epc_pages(size_t nr_to_scan, bool ignore_age)
 	struct sgx_backing backing[SGX_NR_TO_SCAN_MAX];
 	struct sgx_epc_page *epc_page, *tmp;
 	struct sgx_encl_page *encl_page;
+	struct sgx_epc_lru_lists *lru;
 	pgoff_t page_index;
 	LIST_HEAD(iso);
 	size_t ret;
@@ -372,10 +383,11 @@ size_t sgx_reclaim_epc_pages(size_t nr_to_scan, bool ignore_age)
 		continue;
 
 skip:
-		spin_lock(&sgx_global_lru.lock);
+		lru = sgx_lru_lists(epc_page);
+		spin_lock(&lru->lock);
 		sgx_epc_page_set_state(epc_page, SGX_EPC_PAGE_RECLAIMABLE);
-		list_move_tail(&epc_page->list, &sgx_global_lru.reclaimable);
-		spin_unlock(&sgx_global_lru.lock);
+		list_move_tail(&epc_page->list, &lru->reclaimable);
+		spin_unlock(&lru->lock);
 
 		kref_put(&encl_page->encl->refcount, sgx_encl_release);
 	}
@@ -399,7 +411,7 @@ size_t sgx_reclaim_epc_pages(size_t nr_to_scan, bool ignore_age)
 static bool sgx_should_reclaim(unsigned long watermark)
 {
 	return atomic_long_read(&sgx_nr_free_pages) < watermark &&
-	       !list_empty(&sgx_global_lru.reclaimable);
+	       sgx_can_reclaim();
 }
 
 /*
@@ -529,14 +541,16 @@ struct sgx_epc_page *__sgx_alloc_epc_page(void)
  */
 void sgx_record_epc_page(struct sgx_epc_page *page, unsigned long flags)
 {
-	spin_lock(&sgx_global_lru.lock);
+	struct sgx_epc_lru_lists *lru = sgx_lru_lists(page);
+
+	spin_lock(&lru->lock);
 	WARN_ON_ONCE(sgx_epc_page_reclaimable(page->flags));
 	page->flags |= flags;
 	if (sgx_epc_page_reclaimable(flags))
-		list_add_tail(&page->list, &sgx_global_lru.reclaimable);
+		list_add_tail(&page->list, &lru->reclaimable);
 	else
-		list_add_tail(&page->list, &sgx_global_lru.unreclaimable);
-	spin_unlock(&sgx_global_lru.lock);
+		list_add_tail(&page->list, &lru->unreclaimable);
+	spin_unlock(&lru->lock);
 }
 
 /**
@@ -551,15 +565,16 @@ void sgx_record_epc_page(struct sgx_epc_page *page, unsigned long flags)
  */
 int sgx_drop_epc_page(struct sgx_epc_page *page)
 {
-	spin_lock(&sgx_global_lru.lock);
+	struct sgx_epc_lru_lists *lru = sgx_lru_lists(page);
+
+	spin_lock(&lru->lock);
 	if (sgx_epc_page_reclaim_in_progress(page->flags)) {
-		spin_unlock(&sgx_global_lru.lock);
+		spin_unlock(&lru->lock);
 		return -EBUSY;
 	}
-
 	list_del(&page->list);
 	sgx_epc_page_reset_state(page);
 
-	spin_unlock(&sgx_global_lru.lock);
+	spin_unlock(&lru->lock);
 
 	return 0;
 }
@@ -592,7 +607,7 @@ struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim)
 			break;
 		}
 
-		if (list_empty(&sgx_global_lru.reclaimable))
+		if (!sgx_can_reclaim())
 			return ERR_PTR(-ENOMEM);
 
 		if (!reclaim) {
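
The sgx_lru_lists() helper is the single point a follow-up change would
need to touch in order to hand out a per-cgroup LRU instead of the global
one; the reclaimer call sites above stay unchanged. A minimal sketch of
that direction, assuming a hypothetical epc_cg back-pointer in struct
sgx_epc_page and a CONFIG_CGROUP_SGX_EPC Kconfig symbol (neither name is
taken from this patch):

/*
 * Illustration only, not part of this patch: once an EPC cgroup object
 * carrying its own sgx_epc_lru_lists exists, the helper can select it.
 * epc_cg and CONFIG_CGROUP_SGX_EPC are assumed names for this sketch.
 */
static inline struct sgx_epc_lru_lists *sgx_lru_lists(struct sgx_epc_page *epc_page)
{
#ifdef CONFIG_CGROUP_SGX_EPC
	if (epc_page->epc_cg)
		return &epc_page->epc_cg->lru;
#endif
	return &sgx_global_lru;
}

In that scenario sgx_can_reclaim() would presumably also have to consult
the cgroup LRUs rather than only the global reclaimable list, which is
why both accessors are introduced together here.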