From patchwork Tue Jul 11 02:54:52 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "zhangzekun (A)"
X-Patchwork-Id: 118219
From: Zhang Zekun
Subject: [PATCH -next 2/2] iommu/iova: allocate iova_rcache->depot dynamically
Date: Tue, 11 Jul 2023 10:54:52 +0800
Message-ID: <20230711025452.35475-3-zhangzekun11@huawei.com>
In-Reply-To: <20230711025452.35475-1-zhangzekun11@huawei.com>
References: <20230711025452.35475-1-zhangzekun11@huawei.com>
X-Mailer: git-send-email 2.17.1
X-Mailing-List: linux-kernel@vger.kernel.org

In an fio test with 4k read and CPUs allowed set to 0-255, we observe a
decrease in IOPS. Normal IOPS can reach up to 1980k, but we can only get
about 1600k.

abnormal IOPS:
Jobs: 12 (f=12): [R(12)][99.3%][r=6220MiB/s][r=1592k IOPS][eta 00m:12s]
Jobs: 12 (f=12): [R(12)][99.4%][r=6215MiB/s][r=1591k IOPS][eta 00m:11s]
Jobs: 12 (f=12): [R(12)][99.4%][r=6335MiB/s][r=1622k IOPS][eta 00m:10s]
Jobs: 12 (f=12): [R(12)][99.5%][r=6194MiB/s][r=1586k IOPS][eta 00m:09s]
Jobs: 12 (f=12): [R(12)][99.6%][r=6173MiB/s][r=1580k IOPS][eta 00m:08s]
Jobs: 12 (f=12): [R(12)][99.6%][r=5984MiB/s][r=1532k IOPS][eta 00m:07s]
Jobs: 12 (f=12): [R(12)][99.7%][r=6374MiB/s][r=1632k IOPS][eta 00m:06s]
Jobs: 12 (f=12): [R(12)][99.7%][r=6343MiB/s][r=1624k IOPS][eta 00m:05s]

normal IOPS:
Jobs: 12 (f=12): [R(12)][99.3%][r=7736MiB/s][r=1980k IOPS][eta 00m:12s]
Jobs: 12 (f=12): [R(12)][99.4%][r=7744MiB/s][r=1982k IOPS][eta 00m:11s]
Jobs: 12 (f=12): [R(12)][99.4%][r=7737MiB/s][r=1981k IOPS][eta 00m:10s]
Jobs: 12 (f=12): [R(12)][99.5%][r=7735MiB/s][r=1980k IOPS][eta 00m:09s]
Jobs: 12 (f=12): [R(12)][99.6%][r=7741MiB/s][r=1982k IOPS][eta 00m:08s]
Jobs: 12 (f=12): [R(12)][99.6%][r=7740MiB/s][r=1982k IOPS][eta 00m:07s]
Jobs: 12 (f=12): [R(12)][99.7%][r=7736MiB/s][r=1981k IOPS][eta 00m:06s]
Jobs: 12 (f=12): [R(12)][99.7%][r=7736MiB/s][r=1980k IOPS][eta 00m:05s]

The current iova_rcache structure has one iova_cpu_rcache per CPU, and these
iova_cpu_rcaches exchange IOVAs through a shared buffer, iova_rcache->depot.
A machine with 256 CPUs therefore has 256 iova_cpu_rcaches and a single
iova_rcache->depot per iova_domain. However, the maximum size of
iova_rcache->depot is fixed at MAX_GLOBAL_MAGS, which equals 32, and cannot
grow with the number of CPUs; this can cause problems under some conditions.

Some drivers only free IOVAs in their IRQ callback functions. The driver in
this case has 16 threaded IRQs to free IOVAs, but these IRQ callbacks only
free IOVAs on 16 specific CPUs (cpu{0,16,32,...,240}). For example, the
threaded IRQ whose SMP affinity is 0-15 only frees IOVAs on CPU 0. However,
the driver allocates IOVAs on all CPUs (cpu{0-255}), so CPUs without free
IOVAs in their local cpu_rcache have to get them from iova_rcache->depot.
The current maximum depot size of 32 is too small for 256 users (16 CPUs put
IOVAs into iova_rcache->depot while 240 CPUs try to get IOVAs from it).

Make iova_rcache->depot grow with the number of possible CPUs, and the
decrease in IOPS disappears.

Signed-off-by: Zhang Zekun
---
 drivers/iommu/iova.c | 26 ++++++++++++++++++++++----
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index 3c784a28e9ed..df37a4501e98 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -238,6 +238,7 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
 
 static struct kmem_cache *iova_cache;
 static unsigned int iova_cache_users;
+static unsigned int max_global_mags;
 static DEFINE_MUTEX(iova_cache_mutex);
 
 static struct iova *alloc_iova_mem(void)
@@ -625,7 +626,6 @@ EXPORT_SYMBOL_GPL(reserve_iova);
  * will be wasted.
  */
 #define IOVA_MAG_SIZE 127
-#define MAX_GLOBAL_MAGS 32	/* magazines per bin */
 
 struct iova_magazine {
 	unsigned long size;
@@ -641,7 +641,7 @@ struct iova_cpu_rcache {
 struct iova_rcache {
 	spinlock_t lock;
 	unsigned long depot_size;
-	struct iova_magazine *depot[MAX_GLOBAL_MAGS];
+	struct iova_magazine **depot;
 	struct iova_cpu_rcache __percpu *cpu_rcaches;
 };
 
@@ -722,6 +722,13 @@ int iova_domain_init_rcaches(struct iova_domain *iovad)
 	unsigned int cpu;
 	int i, ret;
 
+	/*
+	 * The size of max_global_mags should grow with the number of
+	 * possible CPUs.
+	 */
+	if (!max_global_mags)
+		max_global_mags = max_t(unsigned int, 32, num_possible_cpus());
+
 	iovad->rcaches = kcalloc(IOVA_RANGE_CACHE_MAX_SIZE,
 				 sizeof(struct iova_rcache),
 				 GFP_KERNEL);
@@ -733,6 +740,12 @@ int iova_domain_init_rcaches(struct iova_domain *iovad)
 		struct iova_rcache *rcache;
 
 		rcache = &iovad->rcaches[i];
+		rcache->depot = kcalloc(max_global_mags, sizeof(struct iova_magazine *),
+					GFP_KERNEL);
+		if (!rcache->depot) {
+			ret = -ENOMEM;
+			goto out_err;
+		}
 		spin_lock_init(&rcache->lock);
 		rcache->depot_size = 0;
 		rcache->cpu_rcaches = __alloc_percpu(sizeof(*cpu_rcache),
@@ -798,7 +811,7 @@ static bool __iova_rcache_insert(struct iova_domain *iovad,
 
 		if (new_mag) {
 			spin_lock(&rcache->lock);
-			if (rcache->depot_size < MAX_GLOBAL_MAGS) {
+			if (rcache->depot_size < max_global_mags) {
 				rcache->depot[rcache->depot_size++] =
 						cpu_rcache->loaded;
 			} else {
@@ -903,8 +916,12 @@ static void free_iova_rcaches(struct iova_domain *iovad)
 
 	for (i = 0; i < IOVA_RANGE_CACHE_MAX_SIZE; ++i) {
 		rcache = &iovad->rcaches[i];
-		if (!rcache->cpu_rcaches)
+		if (!rcache->depot)
+			break;
+		if (!rcache->cpu_rcaches) {
+			kfree(rcache->depot);
 			break;
+		}
 		for_each_possible_cpu(cpu) {
 			cpu_rcache = per_cpu_ptr(rcache->cpu_rcaches, cpu);
 			if (!cpu_rcache->loaded)
@@ -917,6 +934,7 @@ static void free_iova_rcaches(struct iova_domain *iovad)
 		free_percpu(rcache->cpu_rcaches);
 		for (j = 0; j < rcache->depot_size; ++j)
 			iova_magazine_free(rcache->depot[j]);
+		kfree(rcache->depot);
 	}
 
 	kfree(iovad->rcaches);
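
For readers following the sizing argument in the commit message, below is a
small standalone userspace sketch (not part of the patch) of the rule the
change introduces: the depot grows to max(32, num_possible_cpus()) slots per
rcache. The magazine size (1024 bytes) and the number of rcache bins (6,
i.e. IOVA_RANGE_CACHE_MAX_SIZE) are assumptions taken from current mainline
iova.c, and depot_slots() is a hypothetical helper used only for
illustration.

/*
 * Userspace illustration of the dynamic depot sizing; not kernel code.
 */
#include <stdio.h>

#define OLD_MAX_GLOBAL_MAGS	32
#define ASSUMED_MAG_BYTES	1024	/* assumed sizeof(struct iova_magazine) */
#define ASSUMED_RCACHE_BINS	6	/* assumed IOVA_RANGE_CACHE_MAX_SIZE */

/* Mirrors max_t(unsigned int, 32, num_possible_cpus()) from the patch. */
static unsigned int depot_slots(unsigned int possible_cpus)
{
	return possible_cpus > OLD_MAX_GLOBAL_MAGS ?
	       possible_cpus : OLD_MAX_GLOBAL_MAGS;
}

int main(void)
{
	unsigned int cpus[] = { 16, 64, 256 };

	for (size_t i = 0; i < sizeof(cpus) / sizeof(cpus[0]); i++) {
		unsigned int slots = depot_slots(cpus[i]);

		/* Pointer-array cost per bin, plus the upper bound on cached magazines. */
		printf("%3u possible CPUs -> %3u depot slots, %5zu bytes of pointers per bin, up to %u KiB of magazines per domain\n",
		       cpus[i], slots, slots * sizeof(void *),
		       slots * ASSUMED_MAG_BYTES * ASSUMED_RCACHE_BINS / 1024);
	}
	return 0;
}

Under these assumptions the depot pointer array itself stays small even on a
256-CPU machine (about 2 KiB per bin); the larger figure is only reached if
every depot slot actually ends up holding a cached magazine.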