From patchwork Fri Dec 2 18:36:39 2022
X-Patchwork-Submitter: Kristen Carlson Accardi
X-Patchwork-Id: 29060
From: Kristen Carlson Accardi
To: jarkko@kernel.org, dave.hansen@linux.intel.com, tj@kernel.org,
        linux-kernel@vger.kernel.org, linux-sgx@vger.kernel.org,
        cgroups@vger.kernel.org, Thomas Gleixner, Ingo Molnar,
        Borislav Petkov, x86@kernel.org, "H. Peter Anvin"
Cc: zhiquan1.li@intel.com, Kristen Carlson Accardi, Sean Christopherson
Subject: [PATCH v2 03/18] x86/sgx: Add 'struct sgx_epc_lru_lists' to encapsulate lru list(s)
Date: Fri, 2 Dec 2022 10:36:39 -0800
Message-Id: <20221202183655.3767674-4-kristen@linux.intel.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20221202183655.3767674-1-kristen@linux.intel.com>
References: <20221202183655.3767674-1-kristen@linux.intel.com>

Introduce a data structure that wraps the existing reclaimable list and
its spinlock. This minimizes the code changes needed to handle multiple
LRUs as well as reclaimable and non-reclaimable lists, both of which will
be introduced and used by SGX EPC cgroups.

Signed-off-by: Sean Christopherson
Signed-off-by: Kristen Carlson Accardi
Cc: Sean Christopherson
---
 arch/x86/kernel/cpu/sgx/sgx.h | 65 +++++++++++++++++++++++++++++++++++
 1 file changed, 65 insertions(+)

diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 39cb15a8abcb..5e6d88438fae 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -90,6 +90,71 @@ static inline void *sgx_get_epc_virt_addr(struct sgx_epc_page *page)
 	return section->virt_addr + index * PAGE_SIZE;
 }
 
+/*
+ * This data structure wraps a list of reclaimable EPC pages, and a list of
+ * non-reclaimable EPC pages and is used to implement a LRU policy during
+ * reclamation.
+ */
+struct sgx_epc_lru_lists {
+	spinlock_t lock;
+	struct list_head reclaimable;
+	struct list_head unreclaimable;
+};
+
+static inline void sgx_lru_init(struct sgx_epc_lru_lists *lrus)
+{
+	spin_lock_init(&lrus->lock);
+	INIT_LIST_HEAD(&lrus->reclaimable);
+	INIT_LIST_HEAD(&lrus->unreclaimable);
+}
+
+/*
+ * Must be called with queue lock acquired
+ */
+static inline void __sgx_epc_page_list_push(struct list_head *list, struct sgx_epc_page *page)
+{
+	list_add_tail(&page->list, list);
+}
+
+/*
+ * Must be called with queue lock acquired
+ */
+static inline struct sgx_epc_page * __sgx_epc_page_list_pop(struct list_head *list)
+{
+	struct sgx_epc_page *epc_page;
+
+	if (list_empty(list))
+		return NULL;
+
+	epc_page = list_first_entry(list, struct sgx_epc_page, list);
+	list_del_init(&epc_page->list);
+	return epc_page;
+}
+
+static inline struct sgx_epc_page *
+sgx_epc_pop_reclaimable(struct sgx_epc_lru_lists *lrus)
+{
+	return __sgx_epc_page_list_pop(&(lrus)->reclaimable);
+}
+
+static inline void sgx_epc_push_reclaimable(struct sgx_epc_lru_lists *lrus,
+					    struct sgx_epc_page *page)
+{
+	__sgx_epc_page_list_push(&(lrus)->reclaimable, page);
+}
+
+static inline struct sgx_epc_page *
+sgx_epc_pop_unreclaimable(struct sgx_epc_lru_lists *lrus)
+{
+	return __sgx_epc_page_list_pop(&(lrus)->unreclaimable);
+}
+
+static inline void sgx_epc_push_unreclaimable(struct sgx_epc_lru_lists *lrus,
+					      struct sgx_epc_page *page)
+{
+	__sgx_epc_page_list_push(&(lrus)->unreclaimable, page);
+}
+
 struct sgx_epc_page *__sgx_alloc_epc_page(void);
 void sgx_free_epc_page(struct sgx_epc_page *page);