From patchwork Mon Oct 17 08:31:59 2022
From: Yunfeng Ye <yeyunfeng@huawei.com>
Subject: [PATCH 1/5] arm64: mm: Define asid_bitmap structure for pinned_asid
Date: Mon, 17 Oct 2022 16:31:59 +0800
Message-ID: <20221017083203.3690346-2-yeyunfeng@huawei.com>
In-Reply-To: <20221017083203.3690346-1-yeyunfeng@huawei.com>

It is clearer to track the pinned ASID state in an asid_bitmap structure,
and we will reuse the structure for the isolated ASIDs later.

No functional change.
Signed-off-by: Yunfeng Ye <yeyunfeng@huawei.com>
---
 arch/arm64/mm/context.c | 38 +++++++++++++++++++++-----------------
 1 file changed, 21 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index e1e0dca01839..8549b5f30352 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -17,6 +17,12 @@
 #include
 #include
 
+struct asid_bitmap {
+	unsigned long *map;
+	unsigned long nr;
+	unsigned long max;
+};
+
 static u32 asid_bits;
 static DEFINE_RAW_SPINLOCK(cpu_asid_lock);
 
@@ -27,9 +33,7 @@ static DEFINE_PER_CPU(atomic64_t, active_asids);
 static DEFINE_PER_CPU(u64, reserved_asids);
 static cpumask_t tlb_flush_pending;
 
-static unsigned long max_pinned_asids;
-static unsigned long nr_pinned_asids;
-static unsigned long *pinned_asid_map;
+static struct asid_bitmap pinned_asid;
 
 #define ASID_MASK		(~GENMASK(asid_bits - 1, 0))
 #define ASID_FIRST_VERSION	(1UL << asid_bits)
@@ -90,8 +94,8 @@ static void set_kpti_asid_bits(unsigned long *map)
 
 static void set_reserved_asid_bits(void)
 {
-	if (pinned_asid_map)
-		bitmap_copy(asid_map, pinned_asid_map, NUM_USER_ASIDS);
+	if (pinned_asid.map)
+		bitmap_copy(asid_map, pinned_asid.map, NUM_USER_ASIDS);
 	else if (arm64_kernel_unmapped_at_el0())
 		set_kpti_asid_bits(asid_map);
 	else
@@ -275,7 +279,7 @@ unsigned long arm64_mm_context_get(struct mm_struct *mm)
 	unsigned long flags;
 	u64 asid;
 
-	if (!pinned_asid_map)
+	if (!pinned_asid.map)
 		return 0;
 
 	raw_spin_lock_irqsave(&cpu_asid_lock, flags);
@@ -285,7 +289,7 @@ unsigned long arm64_mm_context_get(struct mm_struct *mm)
 	if (refcount_inc_not_zero(&mm->context.pinned))
 		goto out_unlock;
 
-	if (nr_pinned_asids >= max_pinned_asids) {
+	if (pinned_asid.nr >= pinned_asid.max) {
 		asid = 0;
 		goto out_unlock;
 	}
@@ -299,8 +303,8 @@ unsigned long arm64_mm_context_get(struct mm_struct *mm)
 		atomic64_set(&mm->context.id, asid);
 	}
 
-	nr_pinned_asids++;
-	__set_bit(ctxid2asid(asid), pinned_asid_map);
+	pinned_asid.nr++;
+	__set_bit(ctxid2asid(asid), pinned_asid.map);
 	refcount_set(&mm->context.pinned, 1);
 
 out_unlock:
@@ -321,14 +325,14 @@ void arm64_mm_context_put(struct mm_struct *mm)
 	unsigned long flags;
 	u64 asid = atomic64_read(&mm->context.id);
 
-	if (!pinned_asid_map)
+	if (!pinned_asid.map)
 		return;
 
 	raw_spin_lock_irqsave(&cpu_asid_lock, flags);
 
 	if (refcount_dec_and_test(&mm->context.pinned)) {
-		__clear_bit(ctxid2asid(asid), pinned_asid_map);
-		nr_pinned_asids--;
+		__clear_bit(ctxid2asid(asid), pinned_asid.map);
+		pinned_asid.nr--;
 	}
 
 	raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
@@ -377,8 +381,8 @@ static int asids_update_limit(void)
 
 	if (arm64_kernel_unmapped_at_el0()) {
 		num_available_asids /= 2;
-		if (pinned_asid_map)
-			set_kpti_asid_bits(pinned_asid_map);
+		if (pinned_asid.map)
+			set_kpti_asid_bits(pinned_asid.map);
 	}
 	/*
	 * Expect allocation after rollover to fail if we don't have at least
@@ -393,7 +397,7 @@
 	 * even if all CPUs have a reserved ASID and the maximum number of ASIDs
 	 * are pinned, there still is at least one empty slot in the ASID map.
 	 */
-	max_pinned_asids = num_available_asids - num_possible_cpus() - 2;
+	pinned_asid.max = num_available_asids - num_possible_cpus() - 2;
 	return 0;
 }
 arch_initcall(asids_update_limit);
@@ -407,8 +411,8 @@ static int asids_init(void)
 		panic("Failed to allocate bitmap for %lu ASIDs\n",
 		      NUM_USER_ASIDS);
 
-	pinned_asid_map = bitmap_zalloc(NUM_USER_ASIDS, GFP_KERNEL);
-	nr_pinned_asids = 0;
+	pinned_asid.map = bitmap_zalloc(NUM_USER_ASIDS, GFP_KERNEL);
+	pinned_asid.nr = 0;
 
 	/*
 	 * We cannot call set_reserved_asid_bits() here because CPU
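For what the structure buys: the three old globals were one logical object
whose invariant is that nr counts the bits set in map and never exceeds max.
A self-contained userspace sketch of that pin/unpin accounting (illustrative
only; the bitmap size is made up and open-coded bit operations stand in for
the kernel bitmap API):

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct asid_bitmap {
	unsigned long *map;	/* one bit per ASID */
	unsigned long nr;	/* bits currently set */
	unsigned long max;	/* pinning capacity */
};

#define BITS_PER_LONG (8 * sizeof(unsigned long))

static bool asid_bitmap_pin(struct asid_bitmap *b, unsigned long asid)
{
	unsigned long *word = &b->map[asid / BITS_PER_LONG];
	unsigned long bit = 1UL << (asid % BITS_PER_LONG);

	if (b->nr >= b->max || (*word & bit))
		return false;	/* capacity reached, or already pinned */
	*word |= bit;
	b->nr++;
	return true;
}

static void asid_bitmap_unpin(struct asid_bitmap *b, unsigned long asid)
{
	unsigned long *word = &b->map[asid / BITS_PER_LONG];
	unsigned long bit = 1UL << (asid % BITS_PER_LONG);

	if (*word & bit) {
		*word &= ~bit;
		b->nr--;
	}
}

int main(void)
{
	struct asid_bitmap b = {
		calloc(1024 / BITS_PER_LONG, sizeof(unsigned long)), 0, 2
	};

	printf("%d %d %d\n", asid_bitmap_pin(&b, 5), asid_bitmap_pin(&b, 9),
	       asid_bitmap_pin(&b, 11));	/* 1 1 0: capacity of 2 reached */
	asid_bitmap_unpin(&b, 5);
	printf("%d\n", asid_bitmap_pin(&b, 11));	/* 1: slot freed */
	free(b.map);
	return 0;
}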
From patchwork Mon Oct 17 08:32:00 2022
From: Yunfeng Ye <yeyunfeng@huawei.com>
Subject: [PATCH 2/5] arm64: mm: Extract the processing of asid_generation
Date: Mon, 17 Oct 2022 16:32:00 +0800
Message-ID: <20221017083203.3690346-3-yeyunfeng@huawei.com>
In-Reply-To: <20221017083203.3690346-1-yeyunfeng@huawei.com>

To prepare for the ASID isolation feature, extract the handling of
asid_generation into helper functions so that it can be modified in one
central place. While at it, move flush_generation() into flush_context(),
which is clearer.
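As background on the helpers being introduced: the whole generation check is
one XOR and one shift, because the low asid_bits of a context ID hold the
ASID and everything above them holds the generation. A minimal userspace
sketch, not taken from the patch (asid_bits is hard-coded to 16 and the
sample IDs are invented; the kernel probes asid_bits per CPU):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static const uint32_t asid_bits = 16;	/* illustrative fixed value */

/* Mirrors asid_match(): true iff both IDs carry the same generation. */
static bool asid_match(uint64_t asid, uint64_t genid)
{
	return !((asid ^ genid) >> asid_bits);
}

int main(void)
{
	uint64_t gen1 = 1ULL << asid_bits;	/* ASID_FIRST_VERSION */
	uint64_t gen2 = 2ULL << asid_bits;	/* after one rollover */
	uint64_t ctxid = gen1 | 42;		/* asid2ctxid(42, gen1) */

	printf("%d\n", asid_match(ctxid, gen1));	/* 1: current */
	printf("%d\n", asid_match(ctxid, gen2));	/* 0: stale after rollover */
	return 0;
}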
Signed-off-by: Yunfeng Ye <yeyunfeng@huawei.com>
---
 arch/arm64/mm/context.c | 39 ++++++++++++++++++++++++++++++++-------
 1 file changed, 32 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 8549b5f30352..380c7b05c36b 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -102,14 +102,40 @@ static void set_reserved_asid_bits(void)
 		bitmap_clear(asid_map, 0, NUM_USER_ASIDS);
 }
 
-#define asid_gen_match(asid) \
-	(!(((asid) ^ atomic64_read(&asid_generation)) >> asid_bits))
+static void asid_generation_init(void)
+{
+	atomic64_set(&asid_generation, ASID_FIRST_VERSION);
+}
+
+static void flush_generation(void)
+{
+	/* We're out of ASIDs, so increment the global generation count */
+	atomic64_add_return_relaxed(ASID_FIRST_VERSION,
+				    &asid_generation);
+}
+
+static inline u64 asid_read_generation(void)
+{
+	return atomic64_read(&asid_generation);
+}
+
+static inline bool asid_match(u64 asid, u64 genid)
+{
+	return (!(((asid) ^ (genid)) >> asid_bits));
+}
+
+static inline bool asid_gen_match(u64 asid)
+{
+	return asid_match(asid, asid_read_generation());
+}
 
 static void flush_context(void)
 {
 	int i;
 	u64 asid;
 
+	flush_generation();
+
 	/* Update the list of reserved ASIDs and the ASID bitmap. */
 	set_reserved_asid_bits();
 
@@ -163,7 +189,7 @@ static u64 new_context(struct mm_struct *mm)
 {
 	static u32 cur_idx = 1;
 	u64 asid = atomic64_read(&mm->context.id);
-	u64 generation = atomic64_read(&asid_generation);
+	u64 generation = asid_read_generation();
 
 	if (asid != 0) {
 		u64 newasid = asid2ctxid(ctxid2asid(asid), generation);
@@ -202,14 +228,12 @@ static u64 new_context(struct mm_struct *mm)
 	if (asid != NUM_USER_ASIDS)
 		goto set_asid;
 
-	/* We're out of ASIDs, so increment the global generation count */
-	generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION,
-						 &asid_generation);
 	flush_context();
 
 	/* We have more ASIDs than CPUs, so this will always succeed */
 	asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, 1);
+	generation = asid_read_generation();
 
 set_asid:
 	__set_bit(asid, asid_map);
 	cur_idx = asid;
@@ -405,7 +429,8 @@ arch_initcall(asids_update_limit);
 static int asids_init(void)
 {
 	asid_bits = get_cpu_asid_bits();
-	atomic64_set(&asid_generation, ASID_FIRST_VERSION);
+	asid_generation_init();
+
 	asid_map = bitmap_zalloc(NUM_USER_ASIDS, GFP_KERNEL);
 	if (!asid_map)
 		panic("Failed to allocate bitmap for %lu ASIDs\n",
From patchwork Mon Oct 17 08:32:01 2022
From: Yunfeng Ye <yeyunfeng@huawei.com>
Subject: [PATCH 3/5] arm64: mm: Use cpumask in flush_context()
Date: Mon, 17 Oct 2022 16:32:01 +0800
Message-ID: <20221017083203.3690346-4-yeyunfeng@huawei.com>
In-Reply-To: <20221017083203.3690346-1-yeyunfeng@huawei.com>

Currently, all CPUs are selected to flush TLB in flush_context().
To prepare for flushing the TLB of only a subset of CPUs, introduce
asid_housekeeping_mask and use cpumask_or() instead of cpumask_setall().

Signed-off-by: Yunfeng Ye <yeyunfeng@huawei.com>
---
 arch/arm64/mm/context.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 380c7b05c36b..e402997aa1c2 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -32,6 +33,7 @@ static unsigned long *asid_map;
 static DEFINE_PER_CPU(atomic64_t, active_asids);
 static DEFINE_PER_CPU(u64, reserved_asids);
 static cpumask_t tlb_flush_pending;
+static const struct cpumask *asid_housekeeping_mask;
 
 static struct asid_bitmap pinned_asid;
 
@@ -129,17 +131,23 @@ static inline bool asid_gen_match(u64 asid)
 	return asid_match(asid, asid_read_generation());
 }
 
+static const struct cpumask *flush_cpumask(void)
+{
+	return asid_housekeeping_mask;
+}
+
 static void flush_context(void)
 {
 	int i;
 	u64 asid;
+	const struct cpumask *cpumask = flush_cpumask();
 
 	flush_generation();
 
 	/* Update the list of reserved ASIDs and the ASID bitmap. */
 	set_reserved_asid_bits();
 
-	for_each_possible_cpu(i) {
+	for_each_cpu(i, cpumask) {
 		asid = atomic64_xchg_relaxed(&per_cpu(active_asids, i), 0);
 		/*
 		 * If this CPU has already been through a
@@ -158,7 +166,7 @@ static void flush_context(void)
 	 * Queue a TLB invalidation for each CPU to perform on next
 	 * context-switch
 	 */
-	cpumask_setall(&tlb_flush_pending);
+	cpumask_or(&tlb_flush_pending, &tlb_flush_pending, cpumask);
 }
 
 static bool check_update_reserved_asid(u64 asid, u64 newasid)
@@ -439,6 +447,8 @@ static int asids_init(void)
 	pinned_asid.map = bitmap_zalloc(NUM_USER_ASIDS, GFP_KERNEL);
 	pinned_asid.nr = 0;
 
+	asid_housekeeping_mask = cpu_possible_mask;
+
 	/*
 	 * We cannot call set_reserved_asid_bits() here because CPU
 	 * caps are not finalized yet, so it is safer to assume KPTI
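The reason cpumask_or() rather than a plain copy (or the old
cpumask_setall()) is needed here is that tlb_flush_pending accumulates: a
CPU that still has a flush queued from an earlier rollover must stay pending
even when the current rollover targets only a subset of CPUs. A toy
userspace model of that behaviour (uint8_t masks standing in for cpumask_t;
the 8-CPU split is invented for illustration):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint8_t tlb_flush_pending = 0;	/* one bit per CPU, 8 CPUs */
	uint8_t housekeeping = 0x0F;	/* CPUs 0-3 */
	uint8_t isolated = 0xF0;	/* CPUs 4-7 */

	/* Rollover #1 flushes everything (old "setall" behaviour). */
	tlb_flush_pending = housekeeping | isolated;
	/* CPUs 0-1 perform their flush on context switch, clearing their bits. */
	tlb_flush_pending &= (uint8_t)~0x03;

	/* Rollover #2 targets only housekeeping CPUs: OR, don't overwrite, */
	/* so CPUs 4-7 (still pending from rollover #1) are not lost. */
	tlb_flush_pending |= housekeeping;

	printf("pending = 0x%02x\n", tlb_flush_pending);	/* 0xff */
	return 0;
}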
From patchwork Mon Oct 17 08:32:02 2022
From: Yunfeng Ye <yeyunfeng@huawei.com>
Subject: [PATCH 4/5] arm64: mm: Support ASID isolation feature
Date: Mon, 17 Oct 2022 16:32:02 +0800
Message-ID: <20221017083203.3690346-5-yeyunfeng@huawei.com>
In-Reply-To: <20221017083203.3690346-1-yeyunfeng@huawei.com>

After a rollover, the global generation is flushed, so mm->context.id no
longer matches the generation for any process on any CPU. On context switch,
every process therefore contends for the global spinlock to reallocate a new
ASID, and the TLBs of all CPUs are refreshed. This increases scheduling
delay and TLB misses. In some delay-sensitive scenarios, for example when
part of the CPUs are isolated and only a limited number of processes are
deployed on the isolated CPUs, we do not want these key processes to be
affected by an ASID rollover.
An ASID isolation method can reduce this interference. We divide
asid_generation into different domains, for example HOUSEKEEPING and
ISOLATION. Processes in the two domains allocate ASIDs from the shared
asid_map pool and combine them with the generation of the local domain to
form mm->context.id. After an ASID rollover, the generation of the
HOUSEKEEPING domain can be flushed independently, and only the TLBs of the
HOUSEKEEPING domain CPUs are flushed, so processes in the ISOLATION domain
are not affected.

In addition, the ASIDs of the ISOLATION domain are stored in the
isolated_asid bitmap. When the asid_map is refreshed, isolated_asid must be
copied into asid_map to ensure that the ASIDs of the ISOLATION domain are
not handed out to other processes. The following figure shows an example:

  HOUSEKEEPING (genid: G1)      ISOLATION (genid: G2)
  task1(G1,1)                   task2(G2,2)  task3(G2,3)
  cpu0   cpu1   cpu3            cpu4   cpu5
  -------------------------     -----------------------
             \                     /
              |                   /   isolated_asid: [2,3]
               \                 /
             asid_map: [1,2,3,4,...,65536]

task1 runs in the HOUSEKEEPING domain; it allocates ASID 1 from the shared
asid_map, so the context id of task1 is (G1,1). task2 and task3 run in the
ISOLATION domain; they allocate ASIDs 2 and 3 from the shared asid_map and
store them in isolated_asid, so the context id of task2 is (G2,2) and that
of task3 is (G2,3). After a rollover, the generation of the HOUSEKEEPING
domain is flushed; say it becomes G3, then the context id of task1 changes
to (G3,1). The generation of the ISOLATION domain is not affected.

In some scenarios, a process has multiple threads running in different
domains, or processes migrate between domains. Since a process has only one
context ID, the question is which generation to select in this case. Our
approach is: as long as the process has run in the ISOLATION domain, select
the generation of the ISOLATION domain. For example:

  HOUSEKEEPING (genid: G1)      ISOLATION (genid: G2)
  task1(G1,1)   ====>           task1(G2,1)
  task2(G2,2)   <====           task2(G2,2)
  cpu0   cpu1   cpu3            cpu4   cpu5
  -------------------------     -----------------------

When task1 migrates from the HOUSEKEEPING domain to the ISOLATION domain,
generation G1 must be changed to G2, and ASID 1 is saved into the
isolated_asid bitmap. But when task2 migrates from the ISOLATION domain to
the HOUSEKEEPING domain, it keeps using generation G2. This settles which
generation is selected when processes migrate between domains.

As mentioned before, the generations of different domains differ. We divide
the generation into two parts: the lowest bit is a Flag bit indicating the
HOUSEKEEPING or ISOLATION domain, and the remaining bits form the
Upper-generation. After a rollover, only the Upper-generation is flushed;
the Flag part never changes during its entire life. This ensures that the
generations of the two domains stay distinct.

  asid_generation
  |---------------------------|-|--------|
        Upper-generation     Flag

Finally, it matters which domain's generation and TLBs are flushed after a
rollover. By default, only the HOUSEKEEPING domain is selected. When the
number of ASIDs in the ISOLATION domain exceeds the maximum threshold, the
ISOLATION domain is selected too.

By default, the ASID isolation feature is disabled, and a cmdline parameter
is provided to enable it.
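To make the generation layout concrete, here is a minimal userspace model of
the encoding (a sketch, not code from the patch: asid_bits is fixed at 16
for illustration, while the kernel probes it at boot):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define ASID_BITS		16
#define NUM_USER_ASIDS		(1ULL << ASID_BITS)
#define ASID_ISOLATION_FLAG	(NUM_USER_ASIDS)	/* lowest generation bit */
#define ASID_FIRST_VERSION	(NUM_USER_ASIDS << 1)	/* rollover increment */

/* Mirrors is_isolated_asid(): the flag bit survives rollovers. */
static bool is_isolated(uint64_t ctxid)
{
	return ctxid & ASID_ISOLATION_FLAG;
}

int main(void)
{
	uint64_t hk  = ASID_FIRST_VERSION | 1;	/* housekeeping: (G1, asid 1) */
	uint64_t iso = ASID_ISOLATION_FLAG | 2;	/* isolation:    (G2, asid 2) */

	printf("%d %d\n", is_isolated(hk), is_isolated(iso));	/* 0 1 */

	/* A rollover adds ASID_FIRST_VERSION: only the upper bits move. */
	iso += ASID_FIRST_VERSION;
	printf("%d\n", is_isolated(iso));			/* still 1 */
	return 0;
}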
Signed-off-by: Yunfeng Ye <yeyunfeng@huawei.com>
---
 arch/arm64/mm/context.c | 203 ++++++++++++++++++++++++++++++++++++----
 1 file changed, 183 insertions(+), 20 deletions(-)

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index e402997aa1c2..0ea3e7485ae7 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -24,10 +25,20 @@ struct asid_bitmap {
 	unsigned long max;
 };
 
+enum {
+	ASID_HOUSEKEEPING = 0,
+	ASID_ISOLATION = 1,
+	ASID_TYPE_MAX,
+};
+
+struct asid_domain {
+	atomic64_t asid_generation;
+};
+
 static u32 asid_bits;
 static DEFINE_RAW_SPINLOCK(cpu_asid_lock);
 
-static atomic64_t asid_generation;
+static struct asid_domain asid_domain[ASID_TYPE_MAX];
 static unsigned long *asid_map;
 
 static DEFINE_PER_CPU(atomic64_t, active_asids);
@@ -36,11 +47,16 @@ static cpumask_t tlb_flush_pending;
 static const struct cpumask *asid_housekeeping_mask;
 
 static struct asid_bitmap pinned_asid;
+static struct asid_bitmap isolated_asid;
+
+static int asid_isolation_cmdline;
+static DEFINE_STATIC_KEY_FALSE(asid_isolation_enable);
 
 #define ASID_MASK		(~GENMASK(asid_bits - 1, 0))
-#define ASID_FIRST_VERSION	(1UL << asid_bits)
+#define NUM_USER_ASIDS		(1UL << asid_bits)
 
-#define NUM_USER_ASIDS		ASID_FIRST_VERSION
+#define ASID_ISOLATION_FLAG	(NUM_USER_ASIDS)
+#define ASID_FIRST_VERSION	(NUM_USER_ASIDS << 1)
 
 #define ctxid2asid(asid)	((asid) & ~ASID_MASK)
 #define asid2ctxid(asid, genid)	((asid) | (genid))
@@ -94,6 +110,61 @@ static void set_kpti_asid_bits(unsigned long *map)
 	memset(map, 0xaa, len);
 }
 
+static inline bool is_isolated_asid(u64 asid)
+{
+	/*
+	 * Note that asid 0 is not the isolated asid. The judgment
+	 * is correct in this situation since the ASID_ISOLATION_FLAG
+	 * bit is defined as 1 to indicate ISOLATION domain.
+	 */
+	return asid & ASID_ISOLATION_FLAG;
+}
+
+static inline bool on_isolated_cpu(int cpu)
+{
+	return !cpumask_test_cpu(cpu, asid_housekeeping_mask);
+}
+
+static inline int asid_domain_type(u64 asid, unsigned int cpu)
+{
+	if (on_isolated_cpu(cpu) || is_isolated_asid(asid))
+		return ASID_ISOLATION;
+
+	return ASID_HOUSEKEEPING;
+}
+
+static inline int asid_flush_type(void)
+{
+	if (isolated_asid.nr > isolated_asid.max)
+		return ASID_ISOLATION;
+	else
+		return ASID_HOUSEKEEPING;
+}
+
+static void asid_try_to_isolate(u64 asid)
+{
+	if (!static_branch_unlikely(&asid_isolation_enable))
+		return;
+
+	if (!is_isolated_asid(asid))
+		return;
+	if (!__test_and_set_bit(ctxid2asid(asid), isolated_asid.map))
+		isolated_asid.nr++;
+}
+
+static void update_reserved_asid_bits(void)
+{
+	if (!static_branch_unlikely(&asid_isolation_enable))
+		return;
+
+	if (asid_flush_type() == ASID_HOUSEKEEPING) {
+		bitmap_or(asid_map, asid_map, isolated_asid.map, NUM_USER_ASIDS);
+	} else {
+		bitmap_zero(isolated_asid.map, NUM_USER_ASIDS);
+		isolated_asid.nr = 0;
+	}
+}
+
 static void set_reserved_asid_bits(void)
 {
 	if (pinned_asid.map)
@@ -102,23 +173,51 @@ static void set_reserved_asid_bits(void)
 		set_kpti_asid_bits(asid_map);
 	else
 		bitmap_clear(asid_map, 0, NUM_USER_ASIDS);
+
+	update_reserved_asid_bits();
 }
 
 static void asid_generation_init(void)
 {
-	atomic64_set(&asid_generation, ASID_FIRST_VERSION);
+	struct asid_domain *ad;
+
+	ad = &asid_domain[ASID_HOUSEKEEPING];
+	atomic64_set(&ad->asid_generation, ASID_FIRST_VERSION);
+
+	ad = &asid_domain[ASID_ISOLATION];
+	atomic64_set(&ad->asid_generation, ASID_ISOLATION_FLAG);
 }
 
 static void flush_generation(void)
 {
+	struct asid_domain *ad = &asid_domain[ASID_HOUSEKEEPING];
+
 	/* We're out of ASIDs, so increment the global generation count */
 	atomic64_add_return_relaxed(ASID_FIRST_VERSION,
-				    &asid_generation);
+				    &ad->asid_generation);
+
+	if (asid_flush_type() == ASID_ISOLATION) {
+		ad = &asid_domain[ASID_ISOLATION];
+		atomic64_add_return_relaxed(ASID_FIRST_VERSION,
+					    &ad->asid_generation);
+	}
 }
 
-static inline u64 asid_read_generation(void)
+static inline u64 asid_read_generation(int type)
 {
-	return atomic64_read(&asid_generation);
+	struct asid_domain *ad = &asid_domain[type];
+
+	return atomic64_read(&ad->asid_generation);
+}
+
+static inline u64 asid_curr_generation(u64 asid)
+{
+	int type = ASID_HOUSEKEEPING;
+
+	if (static_branch_unlikely(&asid_isolation_enable))
+		type = asid_domain_type(asid, smp_processor_id());
+
+	return asid_read_generation(type);
 }
 
 static inline bool asid_match(u64 asid, u64 genid)
@@ -128,12 +227,28 @@ static inline bool asid_match(u64 asid, u64 genid)
 
 static inline bool asid_gen_match(u64 asid)
 {
-	return asid_match(asid, asid_read_generation());
+	return asid_match(asid, asid_curr_generation(asid));
+}
+
+static bool asid_is_migrated(u64 asid, u64 newasid)
+{
+	if (!static_branch_unlikely(&asid_isolation_enable))
+		return false;
+
+	if (!is_isolated_asid(asid) && is_isolated_asid(newasid)) {
+		u64 generation = asid_read_generation(ASID_HOUSEKEEPING);
+
+		return asid_match(asid, generation);
+	}
+	return false;
 }
 
 static const struct cpumask *flush_cpumask(void)
 {
-	return asid_housekeeping_mask;
+	if (asid_flush_type() == ASID_HOUSEKEEPING)
+		return asid_housekeeping_mask;
+
+	return cpu_possible_mask;
 }
 
 static void flush_context(void)
@@ -159,6 +274,7 @@ static void flush_context(void)
 		if (asid == 0)
 			asid = per_cpu(reserved_asids, i);
 		__set_bit(ctxid2asid(asid), asid_map);
+		asid_try_to_isolate(asid);
 		per_cpu(reserved_asids, i) = asid;
 	}
@@ -193,21 +309,23 @@ static bool check_update_reserved_asid(u64 asid, u64 newasid)
 	return hit;
 }
 
-static u64 new_context(struct mm_struct *mm)
+static u64 new_context(struct mm_struct *mm, unsigned int cpu)
 {
 	static u32 cur_idx = 1;
 	u64 asid = atomic64_read(&mm->context.id);
-	u64 generation = asid_read_generation();
+	int domain = asid_domain_type(asid, cpu);
+	u64 generation = asid_read_generation(domain);
+	u64 newasid;
 
 	if (asid != 0) {
-		u64 newasid = asid2ctxid(ctxid2asid(asid), generation);
+		newasid = asid2ctxid(ctxid2asid(asid), generation);
 
 		/*
 		 * If our current ASID was active during a rollover, we
 		 * can continue to use it and this was just a false alarm.
 		 */
 		if (check_update_reserved_asid(asid, newasid))
-			return newasid;
+			goto out;
 
 		/*
 		 * If it is pinned, we can keep using it. Note that reserved
@@ -215,14 +333,21 @@ static u64 new_context(struct mm_struct *mm)
 		 * update the generation into the reserved_asids.
 		 */
 		if (refcount_read(&mm->context.pinned))
-			return newasid;
+			goto out;
 
 		/*
 		 * We had a valid ASID in a previous life, so try to re-use
 		 * it if possible.
 		 */
 		if (!__test_and_set_bit(ctxid2asid(asid), asid_map))
-			return newasid;
+			goto out;
+
+		/*
+		 * We still have a valid ASID now, but the ASID is migrated from
+		 * normal to isolated domain, we should re-use it.
+		 */
+		if (asid_is_migrated(asid, newasid))
+			goto out;
 	}
 
 	/*
@@ -241,11 +366,14 @@ static u64 new_context(struct mm_struct *mm)
 
 	/* We have more ASIDs than CPUs, so this will always succeed */
 	asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, 1);
-	generation = asid_read_generation();
+	generation = asid_read_generation(domain);
 
 set_asid:
 	__set_bit(asid, asid_map);
 	cur_idx = asid;
-	return asid2ctxid(asid, generation);
+	newasid = asid2ctxid(asid, generation);
+out:
+	asid_try_to_isolate(newasid);
+	return newasid;
 }
 
 void check_and_switch_context(struct mm_struct *mm)
@@ -282,12 +410,12 @@ void check_and_switch_context(struct mm_struct *mm)
 	raw_spin_lock_irqsave(&cpu_asid_lock, flags);
 	/* Check that our ASID belongs to the current generation. */
 	asid = atomic64_read(&mm->context.id);
+	cpu = smp_processor_id();
 	if (!asid_gen_match(asid)) {
-		asid = new_context(mm);
+		asid = new_context(mm, cpu);
 		atomic64_set(&mm->context.id, asid);
 	}
 
-	cpu = smp_processor_id();
 	if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending))
 		local_flush_tlb_all();
 
@@ -327,11 +455,12 @@ unsigned long arm64_mm_context_get(struct mm_struct *mm)
 	}
 
 	if (!asid_gen_match(asid)) {
+		unsigned int cpu = smp_processor_id();
 		/*
 		 * We went through one or more rollover since that ASID was
 		 * used. Ensure that it is still valid, or generate a new one.
 		 */
-		asid = new_context(mm);
+		asid = new_context(mm, cpu);
 		atomic64_set(&mm->context.id, asid);
 	}
 
@@ -430,10 +559,36 @@ static int asids_update_limit(void)
 	 * are pinned, there still is at least one empty slot in the ASID map.
 	 */
 	pinned_asid.max = num_available_asids - num_possible_cpus() - 2;
+
+	/*
+	 * Generally, the user does not care about the number of asids, so set
+	 * to half of the total number as the default setting of the maximum
+	 * threshold of the isolated asid.
+	 */
+	if (isolated_asid.map)
+		isolated_asid.max = num_available_asids / 2;
+
 	return 0;
 }
 arch_initcall(asids_update_limit);
 
+static void asid_isolation_init(void)
+{
+	if (asid_isolation_cmdline == 0)
+		return;
+
+	if (!housekeeping_enabled(HK_TYPE_DOMAIN))
+		return;
+
+	isolated_asid.map = bitmap_zalloc(NUM_USER_ASIDS, GFP_KERNEL);
+	if (!isolated_asid.map)
+		return;
+
+	asid_housekeeping_mask = housekeeping_cpumask(HK_TYPE_DOMAIN);
+	static_branch_enable(&asid_isolation_enable);
+	pr_info("ASID Isolation enable\n");
+}
+
 static int asids_init(void)
 {
 	asid_bits = get_cpu_asid_bits();
@@ -448,6 +603,7 @@ static int asids_init(void)
 	pinned_asid.nr = 0;
 
 	asid_housekeeping_mask = cpu_possible_mask;
+	asid_isolation_init();
 
 	/*
 	 * We cannot call set_reserved_asid_bits() here because CPU
@@ -459,3 +615,10 @@ static int asids_init(void)
 	return 0;
 }
 early_initcall(asids_init);
+
+static int __init asid_isolation_setup(char *str)
+{
+	asid_isolation_cmdline = 1;
+	return 1;
+}
+__setup("asid_isolation", asid_isolation_setup);
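A usage note, as an assumption about how the feature would be exercised
rather than something spelled out in the patch: asid_isolation_init() bails
out unless housekeeping_enabled(HK_TYPE_DOMAIN) is true, which in current
kernels is the case when CPUs are excluded from the scheduler domains, e.g.
with isolcpus=. So a boot command line enabling the feature would presumably
combine the two, along the lines of:

	asid_isolation isolcpus=4-7

where the CPU list is entirely system-specific (the 4-7 above is made up).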
From patchwork Mon Oct 17 08:32:03 2022
From: Yunfeng Ye <yeyunfeng@huawei.com>
Subject: [PATCH 5/5] arm64: mm: Add TLB flush trace on context switch
Date: Mon, 17 Oct 2022 16:32:03 +0800
Message-ID: <20221017083203.3690346-6-yeyunfeng@huawei.com>
In-Reply-To: <20221017083203.3690346-1-yeyunfeng@huawei.com>

Currently we have no visibility into how many times the TLB is flushed on
context switch. Add trace_tlb_flush() in check_and_switch_context() so that
this can be observed.
Signed-off-by: Yunfeng Ye <yeyunfeng@huawei.com>
---
 arch/arm64/mm/context.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 0ea3e7485ae7..eab470a97620 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -19,6 +19,8 @@
 #include
 #include
 
+#include
+
 struct asid_bitmap {
 	unsigned long *map;
 	unsigned long nr;
@@ -60,6 +62,8 @@ static DEFINE_STATIC_KEY_FALSE(asid_isolation_enable);
 #define ctxid2asid(asid)	((asid) & ~ASID_MASK)
 #define asid2ctxid(asid, genid)	((asid) | (genid))
 
+#define TLB_FLUSH_ALL		(-1)
+
 /* Get the ASIDBits supported by the current CPU */
 static u32 get_cpu_asid_bits(void)
 {
@@ -416,8 +420,10 @@ void check_and_switch_context(struct mm_struct *mm)
 		atomic64_set(&mm->context.id, asid);
 	}
 
-	if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending))
+	if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending)) {
 		local_flush_tlb_all();
+		trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
+	}
 
 	atomic64_set(this_cpu_ptr(&active_asids), asid);
 	raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);