Message ID | 20230323192111.1501308-1-urezki@gmail.com |
---|---|
State | New |
Headers |
From: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org, LKML <linux-kernel@vger.kernel.org>, Baoquan He <bhe@redhat.com>, Lorenzo Stoakes <lstoakes@gmail.com>, Christoph Hellwig <hch@infradead.org>, Matthew Wilcox <willy@infradead.org>, Dave Chinner <david@fromorbit.com>, Uladzislau Rezki <urezki@gmail.com>, Oleksiy Avramchenko <oleksiy.avramchenko@sony.com>
Subject: [PATCH 1/1] mm: vmalloc: Remove a global vmap_blocks xarray
Date: Thu, 23 Mar 2023 20:21:11 +0100
Message-Id: <20230323192111.1501308-1-urezki@gmail.com> |
Series | [1/1] mm: vmalloc: Remove a global vmap_blocks xarray |
Commit Message
Uladzislau Rezki
March 23, 2023, 7:21 p.m. UTC
The global vmap_blocks xarray can be contended under heavy use of
the vm_map_ram()/vm_unmap_ram() APIs. lock_stat shows that the
"vmap_blocks.xa_lock" is the second most contended lock in that
workload:
<snip>
----------------------------------------
class name con-bounces contentions ...
----------------------------------------
vmap_area_lock: 2554079 2554276 ...
--------------
vmap_area_lock 1297948 [<00000000dd41cbaa>] alloc_vmap_area+0x1c7/0x910
vmap_area_lock 1256330 [<000000009d927bf3>] free_vmap_block+0x4a/0xe0
vmap_area_lock 1 [<00000000c95c05a7>] find_vm_area+0x16/0x70
--------------
vmap_area_lock 1738590 [<00000000dd41cbaa>] alloc_vmap_area+0x1c7/0x910
vmap_area_lock 815688 [<000000009d927bf3>] free_vmap_block+0x4a/0xe0
vmap_area_lock 1 [<00000000c1d619d7>] __get_vm_area_node+0xd2/0x170
vmap_blocks.xa_lock: 862689 862698 ...
-------------------
vmap_blocks.xa_lock 378418 [<00000000625a5626>] vm_map_ram+0x359/0x4a0
vmap_blocks.xa_lock 484280 [<00000000caa2ef03>] xa_erase+0xe/0x30
-------------------
vmap_blocks.xa_lock 576226 [<00000000caa2ef03>] xa_erase+0xe/0x30
vmap_blocks.xa_lock 286472 [<00000000625a5626>] vm_map_ram+0x359/0x4a0
...
<snip>
which is the result of running vm_map_ram()/vm_unmap_ram() in a loop.
The test creates 64 threads (one per CPU on a 64-CPU system), each of
which maps and unmaps one page; a sketch of such a test is shown below.
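A minimal sketch of that kind of stress test, assuming a standalone
test module (the function name is made up; only vm_map_ram(),
vm_unmap_ram() and the kthread/page helpers are real kernel APIs):

<snip>
#include <linux/kthread.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>

/* One instance of this thread is started per CPU by the test module. */
static int map_unmap_one_page_fn(void *unused)
{
	struct page *page = alloc_page(GFP_KERNEL);
	void *vaddr;

	if (!page)
		return -ENOMEM;

	while (!kthread_should_stop()) {
		vaddr = vm_map_ram(&page, 1, NUMA_NO_NODE);
		if (!vaddr)
			break;

		vm_unmap_ram(vaddr, 1);
	}

	__free_page(page);
	return 0;
}
<snip>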
After this change, under the same test conditions, the "xa_lock"
contention can be considered noise:
<snip>
...
&xa->xa_lock#1: 10333 10394 ...
--------------
&xa->xa_lock#1 5349 [<00000000bbbc9751>] xa_erase+0xe/0x30
&xa->xa_lock#1 5045 [<0000000018def45d>] vm_map_ram+0x3a4/0x4f0
--------------
&xa->xa_lock#1 7326 [<0000000018def45d>] vm_map_ram+0x3a4/0x4f0
&xa->xa_lock#1 3068 [<00000000bbbc9751>] xa_erase+0xe/0x30
...
<snip>
This patch does not address the vmap_area_lock, free_vmap_area_lock and
purge_vmap_area_lock bottlenecks; that is left for a separate rework.
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
mm/vmalloc.c | 50 ++++++++++++++++++++++++++------------------------
1 file changed, 26 insertions(+), 24 deletions(-)
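For readers who do not want to scan the full diff, the heart of the
change is that each per-CPU vmap_block_queue now owns its own xarray,
and the queue (and therefore the xa_lock) used for a given block is
picked by hashing the block address. The lines below are condensed
from the patch itself:

<snip>
struct vmap_block_queue {
	spinlock_t lock;
	struct list_head free;
	struct xarray vmap_blocks;	/* previously one global xarray */
};

static struct vmap_block_queue *
addr_to_vbq(unsigned long addr)
{
	int cpu = (addr / VMAP_BLOCK_SIZE) % num_possible_cpus();

	return &per_cpu(vmap_block_queue, cpu);
}

/* Free/lookup path, keyed by the block's start address: */
vbq = addr_to_vbq(addr);
vb = xa_load(&vbq->vmap_blocks, addr_to_vb_va_start(addr));
<snip>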
Comments
On Thu, 23 Mar 2023 20:21:11 +0100 "Uladzislau Rezki (Sony)" <urezki@gmail.com> wrote:

> A global vmap_blocks-xarray array can be contented under
> heavy usage of the vm_map_ram()/vm_unmap_ram() APIs. The
> lock_stat shows that a "vmap_blocks.xa_lock" lock is a
> second in a top-list when it comes to contentions:
>
> ...
>
> This patch does not fix vmap_area_lock/free_vmap_area_lock and
> purge_vmap_area_lock bottle-necks, it is rather a separate rework.
>
> ...
>
> static DEFINE_PER_CPU(struct vmap_block_queue, vmap_block_queue);
>
> ...
>
> +static struct vmap_block_queue *
> +addr_to_vbq(unsigned long addr)
> +{
> +	int cpu = (addr / VMAP_BLOCK_SIZE) % num_possible_cpus();
> +	return &per_cpu(vmap_block_queue, cpu);
> +}

Seems strange. vmap_block_queue is not a per-cpu thing in this usage.
Instead it's a hash table, indexed off the (hashed) address, not off
smp_processor_id().

Yet in other places, vmap_block_queue *is* used in the conventional
cpu-local fashion.

So we can have CPU A using the cpu-local entry in vmap_block_queue
while CPU B is simultaneously using it, having looked it up via `addr'.

AFAICT this all works OK, no races.

But still, what it's doing is mixing an addr-indexed hashtable with the
CPU-indexed array in surprising ways. It would be clearer to make the
vmap_blocks array a separate thing from the per-cpu array, although it
would presumably use a bit more memory.

Can we please at least get a big fat comment in an appropriate place
which explains all this to the reader?
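To make the concern above concrete, here is a simplified sketch of the
two ways a vmap_block_queue gets selected after this patch. The helper
names vbq_for_alloc()/vbq_for_free() are invented for illustration; in
mm/vmalloc.c the allocation side uses the running CPU's queue and the
free side hashes the address:

<snip>
/* Allocation path: the queue of whatever CPU we happen to run on. */
static struct vmap_block_queue *vbq_for_alloc(void)
{
	return raw_cpu_ptr(&vmap_block_queue);
}

/*
 * Free path: the queue the block address hashes to, which may well
 * "belong" to another CPU. Both paths are serialized by the xarray's
 * internal xa_lock (and by vb->lock), so there is no race.
 */
static struct vmap_block_queue *vbq_for_free(unsigned long addr)
{
	int cpu = (addr / VMAP_BLOCK_SIZE) % num_possible_cpus();

	return &per_cpu(vmap_block_queue, cpu);
}
<snip>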
On Thu, Mar 23, 2023 at 08:21:11PM +0100, Uladzislau Rezki (Sony) wrote: > A global vmap_blocks-xarray array can be contented under > heavy usage of the vm_map_ram()/vm_unmap_ram() APIs. The > lock_stat shows that a "vmap_blocks.xa_lock" lock is a > second in a top-list when it comes to contentions: > > <snip> > ---------------------------------------- > class name con-bounces contentions ... > ---------------------------------------- > vmap_area_lock: 2554079 2554276 ... > -------------- > vmap_area_lock 1297948 [<00000000dd41cbaa>] alloc_vmap_area+0x1c7/0x910 > vmap_area_lock 1256330 [<000000009d927bf3>] free_vmap_block+0x4a/0xe0 > vmap_area_lock 1 [<00000000c95c05a7>] find_vm_area+0x16/0x70 > -------------- > vmap_area_lock 1738590 [<00000000dd41cbaa>] alloc_vmap_area+0x1c7/0x910 > vmap_area_lock 815688 [<000000009d927bf3>] free_vmap_block+0x4a/0xe0 > vmap_area_lock 1 [<00000000c1d619d7>] __get_vm_area_node+0xd2/0x170 > > vmap_blocks.xa_lock: 862689 862698 ... > ------------------- > vmap_blocks.xa_lock 378418 [<00000000625a5626>] vm_map_ram+0x359/0x4a0 > vmap_blocks.xa_lock 484280 [<00000000caa2ef03>] xa_erase+0xe/0x30 > ------------------- > vmap_blocks.xa_lock 576226 [<00000000caa2ef03>] xa_erase+0xe/0x30 > vmap_blocks.xa_lock 286472 [<00000000625a5626>] vm_map_ram+0x359/0x4a0 > ... > <snip> > > that is a result of running vm_map_ram()/vm_unmap_ram() in > a loop. The test creates 64(on 64 CPUs system) threads and > each one maps/unmaps 1 page. > > After this change the "xa_lock" can be considered as a noise > in the same test condition: > > <snip> > ... > &xa->xa_lock#1: 10333 10394 ... > -------------- > &xa->xa_lock#1 5349 [<00000000bbbc9751>] xa_erase+0xe/0x30 > &xa->xa_lock#1 5045 [<0000000018def45d>] vm_map_ram+0x3a4/0x4f0 > -------------- > &xa->xa_lock#1 7326 [<0000000018def45d>] vm_map_ram+0x3a4/0x4f0 > &xa->xa_lock#1 3068 [<00000000bbbc9751>] xa_erase+0xe/0x30 > ... > <snip> > Nice! Really good to see contention reduced, but in addition I'm a huge fan of us removing the global state in vmalloc and this is a good start. I've noticed a small perf regression after 3 runs of ./test_vmalloc.sh performance from an average of 119356136169 cycles to 120404645782 or +0.9% but this doesn't seem especially egregious. > This patch does not fix vmap_area_lock/free_vmap_area_lock and > purge_vmap_area_lock bottle-necks, it is rather a separate rework. > > Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com> > --- > mm/vmalloc.c | 50 ++++++++++++++++++++++++++------------------------ > 1 file changed, 26 insertions(+), 24 deletions(-) > > diff --git a/mm/vmalloc.c b/mm/vmalloc.c > index 978194dc2bb8..13b5342bed9a 100644 > --- a/mm/vmalloc.c > +++ b/mm/vmalloc.c > @@ -1911,6 +1911,7 @@ static struct vmap_area *find_unlink_vmap_area(unsigned long addr) > struct vmap_block_queue { > spinlock_t lock; > struct list_head free; > + struct xarray vmap_blocks; > }; > > struct vmap_block { > @@ -1927,25 +1928,18 @@ struct vmap_block { > /* Queue of free and dirty vmap blocks, for allocation and flushing purposes */ > static DEFINE_PER_CPU(struct vmap_block_queue, vmap_block_queue); > > -/* > - * XArray of vmap blocks, indexed by address, to quickly find a vmap block > - * in the free path. Could get rid of this if we change the API to return a > - * "cookie" from alloc, to be passed to free. But no big deal yet. > - */ Doesn't this comment still apply? Or is the idea of returning the "cookie" not really viable? 
> -static DEFINE_XARRAY(vmap_blocks); > - > -/* > - * We should probably have a fallback mechanism to allocate virtual memory > - * out of partially filled vmap blocks. However vmap block sizing should be > - * fairly reasonable according to the vmalloc size, so it shouldn't be a > - * big problem. > - */ Again, is this comment no longer relevant? > +static struct vmap_block_queue * > +addr_to_vbq(unsigned long addr) > +{ > + int cpu = (addr / VMAP_BLOCK_SIZE) % num_possible_cpus(); > + return &per_cpu(vmap_block_queue, cpu); > +} Andrew's already commented on this, so I won't dwell but it does seem odd to subdivide by number of possible CPUs rather than just use the actual CPU. I guess your response to his question will also answer mine :) > > -static unsigned long addr_to_vb_idx(unsigned long addr) > +static unsigned long > +addr_to_vb_va_start(unsigned long addr) > { > - addr -= VMALLOC_START & ~(VMAP_BLOCK_SIZE-1); > - addr /= VMAP_BLOCK_SIZE; > - return addr; > + /* A start address of block an address belongs to. */ A nit, but might be worth referring to the assert in vmap_block_vaddr(), as this comment seems a bit redundant otherwise as it is implied by the code it comments. > + return rounddown(addr, VMAP_BLOCK_SIZE); > } > > static void *vmap_block_vaddr(unsigned long va_start, unsigned long pages_off) > @@ -1953,7 +1947,7 @@ static void *vmap_block_vaddr(unsigned long va_start, unsigned long pages_off) > unsigned long addr; > > addr = va_start + (pages_off << PAGE_SHIFT); > - BUG_ON(addr_to_vb_idx(addr) != addr_to_vb_idx(va_start)); > + BUG_ON(addr_to_vb_va_start(addr) != addr_to_vb_va_start(va_start)); Maybe nitty, but perhaps better to WARN_ON() here to avoid BUG_ON proliferation? And can't this be the below? WARN_ON(addr_to_vb_va_start(addr) != va_start); > return (void *)addr; > } > > @@ -1970,7 +1964,6 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask) > struct vmap_block_queue *vbq; > struct vmap_block *vb; > struct vmap_area *va; > - unsigned long vb_idx; > int node, err; > void *vaddr; > > @@ -2003,8 +1996,8 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask) > bitmap_set(vb->used_map, 0, (1UL << order)); > INIT_LIST_HEAD(&vb->free_list); > > - vb_idx = addr_to_vb_idx(va->va_start); > - err = xa_insert(&vmap_blocks, vb_idx, vb, gfp_mask); > + vbq = addr_to_vbq(va->va_start); > + err = xa_insert(&vbq->vmap_blocks, va->va_start, vb, gfp_mask); This seems actually like a nice subtle improvement in that we are now indexing always on va_start explicitly and will always load using addr_to_vb_va_start(). 
> if (err) { > kfree(vb); > free_vmap_area(va); > @@ -2021,9 +2014,11 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask) > > static void free_vmap_block(struct vmap_block *vb) > { > + struct vmap_block_queue *vbq; > struct vmap_block *tmp; > > - tmp = xa_erase(&vmap_blocks, addr_to_vb_idx(vb->va->va_start)); > + vbq = addr_to_vbq(vb->va->va_start); > + tmp = xa_erase(&vbq->vmap_blocks, vb->va->va_start); > BUG_ON(tmp != vb); > > spin_lock(&vmap_area_lock); > @@ -2135,6 +2130,7 @@ static void vb_free(unsigned long addr, unsigned long size) > unsigned long offset; > unsigned int order; > struct vmap_block *vb; > + struct vmap_block_queue *vbq; > > BUG_ON(offset_in_page(size)); > BUG_ON(size > PAGE_SIZE*VMAP_MAX_ALLOC); > @@ -2143,7 +2139,10 @@ static void vb_free(unsigned long addr, unsigned long size) > > order = get_order(size); > offset = (addr & (VMAP_BLOCK_SIZE - 1)) >> PAGE_SHIFT; > - vb = xa_load(&vmap_blocks, addr_to_vb_idx(addr)); > + > + vbq = addr_to_vbq(addr); > + vb = xa_load(&vbq->vmap_blocks, addr_to_vb_va_start(addr)); > + > spin_lock(&vb->lock); > bitmap_clear(vb->used_map, offset, (1UL << order)); > spin_unlock(&vb->lock); > @@ -3486,6 +3485,7 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags > { > char *start; > struct vmap_block *vb; > + struct vmap_block_queue *vbq; > unsigned long offset; > unsigned int rs, re, n; > > @@ -3503,7 +3503,8 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags > * Area is split into regions and tracked with vmap_block, read out > * each region and zero fill the hole between regions. > */ > - vb = xa_load(&vmap_blocks, addr_to_vb_idx((unsigned long)addr)); > + vbq = addr_to_vbq((unsigned long) addr); > + vb = xa_load(&vbq->vmap_blocks, addr_to_vb_va_start((unsigned long) addr)); > if (!vb) > goto finished; > > @@ -4272,6 +4273,7 @@ void __init vmalloc_init(void) > p = &per_cpu(vfree_deferred, i); > init_llist_head(&p->list); > INIT_WORK(&p->wq, delayed_vfree_work); > + xa_init(&vbq->vmap_blocks); > } > > /* Import existing vmlist entries. */ > -- > 2.30.2 >
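On the "cookie" comment removed by the patch: the idea it refers to is
not an existing API. A purely hypothetical sketch of what it would mean
is below, where the mapping call hands back an opaque handle so the
unmap side needs no lookup structure at all (names and signatures are
illustrative only):

<snip>
/* Hypothetical; not part of mm/vmalloc.c. */
struct vb_cookie {
	struct vmap_block *vb;
	void *vaddr;
};

void *vm_map_ram_cookie(struct page **pages, unsigned int count,
			int node, struct vb_cookie *cookie);
void vm_unmap_ram_cookie(struct vb_cookie *cookie, unsigned int count);
<snip>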
On Thu, Mar 23, 2023 at 02:12:53PM -0700, Andrew Morton wrote:
> On Thu, 23 Mar 2023 20:21:11 +0100 "Uladzislau Rezki (Sony)" <urezki@gmail.com> wrote:
>
> > A global vmap_blocks-xarray array can be contented under
> > heavy usage of the vm_map_ram()/vm_unmap_ram() APIs. The
> > lock_stat shows that a "vmap_blocks.xa_lock" lock is a
> > second in a top-list when it comes to contentions:
> >
> > ...
> >
> > This patch does not fix vmap_area_lock/free_vmap_area_lock and
> > purge_vmap_area_lock bottle-necks, it is rather a separate rework.
> >
> > ...
> >
> > static DEFINE_PER_CPU(struct vmap_block_queue, vmap_block_queue);
> >
> > ...
> >
> > +static struct vmap_block_queue *
> > +addr_to_vbq(unsigned long addr)
> > +{
> > +	int cpu = (addr / VMAP_BLOCK_SIZE) % num_possible_cpus();
> > +	return &per_cpu(vmap_block_queue, cpu);
> > +}
>
> Seems strange. vmap_block_queue is not a per-cpu thing in this usage.
> Instead it's a hash table, indexed off the (hashed) address, not off
> smp_processor_id().
>
> Yet in other places, vmap_block_queue *is* used in the conventional
> cpu-local fashion.
>
> So we can have CPU A using the cpu-local entry in vmap_block_queue
> while CPU B is simultaneously using it, having looked it up via `addr'.
>
> AFAICT this all works OK, no races.
>
> But still, what it's doing is mixing an addr-indexed hashtable with the
> CPU-indexed array in surprising ways. It would be clearer to make the
> vmap_blocks array a separate thing from the per-cpu array, although it
> would presumably use a bit more memory.
>
> Can we please at least get a big fat comment in an appropriate place
> which explains all this to the reader?
>
Yep, i will send out a v2 with all explanation. Indeed i have to add
detailed explanation.

Thanks!

--
Uladzislau Rezki
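One possible shape for the requested comment (the wording is a guess,
not taken from the eventual v2):

<snip>
/*
 * A vmap_block is stored in the xarray of the vmap_block_queue whose
 * index is (va_start / VMAP_BLOCK_SIZE) % num_possible_cpus(). The
 * per-CPU vmap_block_queue is therefore used in two ways: as the
 * current CPU's queue on the allocation path, and as a hash bucket on
 * the free/lookup path. Both accesses are serialized by the xarray's
 * internal xa_lock, so no extra locking is required, but the structure
 * is deliberately indexed in these two different ways.
 */
<snip>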
On Thu, Mar 23, 2023 at 09:46:00PM +0000, Lorenzo Stoakes wrote: > On Thu, Mar 23, 2023 at 08:21:11PM +0100, Uladzislau Rezki (Sony) wrote: > > A global vmap_blocks-xarray array can be contented under > > heavy usage of the vm_map_ram()/vm_unmap_ram() APIs. The > > lock_stat shows that a "vmap_blocks.xa_lock" lock is a > > second in a top-list when it comes to contentions: > > > > <snip> > > ---------------------------------------- > > class name con-bounces contentions ... > > ---------------------------------------- > > vmap_area_lock: 2554079 2554276 ... > > -------------- > > vmap_area_lock 1297948 [<00000000dd41cbaa>] alloc_vmap_area+0x1c7/0x910 > > vmap_area_lock 1256330 [<000000009d927bf3>] free_vmap_block+0x4a/0xe0 > > vmap_area_lock 1 [<00000000c95c05a7>] find_vm_area+0x16/0x70 > > -------------- > > vmap_area_lock 1738590 [<00000000dd41cbaa>] alloc_vmap_area+0x1c7/0x910 > > vmap_area_lock 815688 [<000000009d927bf3>] free_vmap_block+0x4a/0xe0 > > vmap_area_lock 1 [<00000000c1d619d7>] __get_vm_area_node+0xd2/0x170 > > > > vmap_blocks.xa_lock: 862689 862698 ... > > ------------------- > > vmap_blocks.xa_lock 378418 [<00000000625a5626>] vm_map_ram+0x359/0x4a0 > > vmap_blocks.xa_lock 484280 [<00000000caa2ef03>] xa_erase+0xe/0x30 > > ------------------- > > vmap_blocks.xa_lock 576226 [<00000000caa2ef03>] xa_erase+0xe/0x30 > > vmap_blocks.xa_lock 286472 [<00000000625a5626>] vm_map_ram+0x359/0x4a0 > > ... > > <snip> > > > > that is a result of running vm_map_ram()/vm_unmap_ram() in > > a loop. The test creates 64(on 64 CPUs system) threads and > > each one maps/unmaps 1 page. > > > > After this change the "xa_lock" can be considered as a noise > > in the same test condition: > > > > <snip> > > ... > > &xa->xa_lock#1: 10333 10394 ... > > -------------- > > &xa->xa_lock#1 5349 [<00000000bbbc9751>] xa_erase+0xe/0x30 > > &xa->xa_lock#1 5045 [<0000000018def45d>] vm_map_ram+0x3a4/0x4f0 > > -------------- > > &xa->xa_lock#1 7326 [<0000000018def45d>] vm_map_ram+0x3a4/0x4f0 > > &xa->xa_lock#1 3068 [<00000000bbbc9751>] xa_erase+0xe/0x30 > > ... > > <snip> > > > > Nice! Really good to see contention reduced, but in addition I'm a huge fan > of us removing the global state in vmalloc and this is a good start. > > I've noticed a small perf regression after 3 runs of ./test_vmalloc.sh > performance from an average of 119356136169 cycles to 120404645782 or +0.9% > but this doesn't seem especially egregious. > We are lack of extra vm_map_ram()/vm_unmap_ram() tests in the test_vmalloc.sh. It would be good to add them to the test-suite. > > This patch does not fix vmap_area_lock/free_vmap_area_lock and > > purge_vmap_area_lock bottle-necks, it is rather a separate rework. 
> > > > Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com> > > --- > > mm/vmalloc.c | 50 ++++++++++++++++++++++++++------------------------ > > 1 file changed, 26 insertions(+), 24 deletions(-) > > > > diff --git a/mm/vmalloc.c b/mm/vmalloc.c > > index 978194dc2bb8..13b5342bed9a 100644 > > --- a/mm/vmalloc.c > > +++ b/mm/vmalloc.c > > @@ -1911,6 +1911,7 @@ static struct vmap_area *find_unlink_vmap_area(unsigned long addr) > > struct vmap_block_queue { > > spinlock_t lock; > > struct list_head free; > > + struct xarray vmap_blocks; > > }; > > > > struct vmap_block { > > @@ -1927,25 +1928,18 @@ struct vmap_block { > > /* Queue of free and dirty vmap blocks, for allocation and flushing purposes */ > > static DEFINE_PER_CPU(struct vmap_block_queue, vmap_block_queue); > > > > -/* > > - * XArray of vmap blocks, indexed by address, to quickly find a vmap block > > - * in the free path. Could get rid of this if we change the API to return a > > - * "cookie" from alloc, to be passed to free. But no big deal yet. > > - */ > > Doesn't this comment still apply? Or is the idea of returning the "cookie" > not really viable? > Since a vmap_block_queue is a per-cpu thing, though it is not fully serialized in terms of per-cpu classical meaning, IMHO, it is not a big issue. If we return a cookie then, indeed, we do not need to find a vmap_block and performance wise it should be better. For how much, i do not know, it requires data. From the other hand an API has to be changed accordingly. But i can leave the comment! > > -static DEFINE_XARRAY(vmap_blocks); > > - > > -/* > > - * We should probably have a fallback mechanism to allocate virtual memory > > - * out of partially filled vmap blocks. However vmap block sizing should be > > - * fairly reasonable according to the vmalloc size, so it shouldn't be a > > - * big problem. > > - */ > > Again, is this comment no longer relevant? > Looks like yes :) But i am not sure i understand correctly what author meant. It looks like this: <snip> void *vm_map_ram(struct page **pages, unsigned int count, int node) { unsigned long size = (unsigned long)count << PAGE_SHIFT; unsigned long addr; void *mem; if (likely(count <= VMAP_MAX_ALLOC)) { mem = vb_alloc(size, GFP_KERNEL); if (IS_ERR(mem)) return NULL; ... <snip> instead of returning NULL, go directly with a fall-back, that is: <snip> struct vmap_area *va; va = alloc_vmap_area(size, PAGE_SIZE, VMALLOC_START, VMALLOC_END, node, GFP_KERNEL, VMAP_RAM); if (IS_ERR(va)) return NULL; addr = va->va_start; mem = (void *)addr; <snip> > > +static struct vmap_block_queue * > > +addr_to_vbq(unsigned long addr) > > +{ > > + int cpu = (addr / VMAP_BLOCK_SIZE) % num_possible_cpus(); > > + return &per_cpu(vmap_block_queue, cpu); > > +} > > Andrew's already commented on this, so I won't dwell but it does seem odd > to subdivide by number of possible CPUs rather than just use the actual > CPU. I guess your response to his question will also answer mine :) > I will upload a v2 where i try to explain in detail as much as i can, after that we can see if there are extra comments or questions and discuss if so. > > > > -static unsigned long addr_to_vb_idx(unsigned long addr) > > +static unsigned long > > +addr_to_vb_va_start(unsigned long addr) > > { > > - addr -= VMALLOC_START & ~(VMAP_BLOCK_SIZE-1); > > - addr /= VMAP_BLOCK_SIZE; > > - return addr; > > + /* A start address of block an address belongs to. 
*/ > > A nit, but might be worth referring to the assert in vmap_block_vaddr(), as > this comment seems a bit redundant otherwise as it is implied by the code > it comments. > OK. I can remove that comment. > > + return rounddown(addr, VMAP_BLOCK_SIZE); > > } > > > > static void *vmap_block_vaddr(unsigned long va_start, unsigned long pages_off) > > @@ -1953,7 +1947,7 @@ static void *vmap_block_vaddr(unsigned long va_start, unsigned long pages_off) > > unsigned long addr; > > > > addr = va_start + (pages_off << PAGE_SHIFT); > > - BUG_ON(addr_to_vb_idx(addr) != addr_to_vb_idx(va_start)); > > + BUG_ON(addr_to_vb_va_start(addr) != addr_to_vb_va_start(va_start)); > > Maybe nitty, but perhaps better to WARN_ON() here to avoid BUG_ON proliferation? > Indeed, it is better to go with WARN_ON() or even WARN_ON_ONCE(). > And can't this be the below? > > WARN_ON(addr_to_vb_va_start(addr) != va_start); > Yep, it can be. Thanks for it! > > return (void *)addr; > > } > > > > @@ -1970,7 +1964,6 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask) > > struct vmap_block_queue *vbq; > > struct vmap_block *vb; > > struct vmap_area *va; > > - unsigned long vb_idx; > > int node, err; > > void *vaddr; > > > > @@ -2003,8 +1996,8 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask) > > bitmap_set(vb->used_map, 0, (1UL << order)); > > INIT_LIST_HEAD(&vb->free_list); > > > > - vb_idx = addr_to_vb_idx(va->va_start); > > - err = xa_insert(&vmap_blocks, vb_idx, vb, gfp_mask); > > + vbq = addr_to_vbq(va->va_start); > > + err = xa_insert(&vbq->vmap_blocks, va->va_start, vb, gfp_mask); > > This seems actually like a nice subtle improvement in that we are now > indexing always on va_start explicitly and will always load using > addr_to_vb_va_start(). > Yep, because we already have an index, it is a va->va_start. 
> > > if (err) { > > kfree(vb); > > free_vmap_area(va); > > @@ -2021,9 +2014,11 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask) > > > > static void free_vmap_block(struct vmap_block *vb) > > { > > + struct vmap_block_queue *vbq; > > struct vmap_block *tmp; > > > > - tmp = xa_erase(&vmap_blocks, addr_to_vb_idx(vb->va->va_start)); > > + vbq = addr_to_vbq(vb->va->va_start); > > + tmp = xa_erase(&vbq->vmap_blocks, vb->va->va_start); > > BUG_ON(tmp != vb); > > > > spin_lock(&vmap_area_lock); > > @@ -2135,6 +2130,7 @@ static void vb_free(unsigned long addr, unsigned long size) > > unsigned long offset; > > unsigned int order; > > struct vmap_block *vb; > > + struct vmap_block_queue *vbq; > > > > BUG_ON(offset_in_page(size)); > > BUG_ON(size > PAGE_SIZE*VMAP_MAX_ALLOC); > > @@ -2143,7 +2139,10 @@ static void vb_free(unsigned long addr, unsigned long size) > > > > order = get_order(size); > > offset = (addr & (VMAP_BLOCK_SIZE - 1)) >> PAGE_SHIFT; > > - vb = xa_load(&vmap_blocks, addr_to_vb_idx(addr)); > > + > > + vbq = addr_to_vbq(addr); > > + vb = xa_load(&vbq->vmap_blocks, addr_to_vb_va_start(addr)); > > + > > spin_lock(&vb->lock); > > bitmap_clear(vb->used_map, offset, (1UL << order)); > > spin_unlock(&vb->lock); > > @@ -3486,6 +3485,7 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags > > { > > char *start; > > struct vmap_block *vb; > > + struct vmap_block_queue *vbq; > > unsigned long offset; > > unsigned int rs, re, n; > > > > @@ -3503,7 +3503,8 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags > > * Area is split into regions and tracked with vmap_block, read out > > * each region and zero fill the hole between regions. > > */ > > - vb = xa_load(&vmap_blocks, addr_to_vb_idx((unsigned long)addr)); > > + vbq = addr_to_vbq((unsigned long) addr); > > + vb = xa_load(&vbq->vmap_blocks, addr_to_vb_va_start((unsigned long) addr)); > > if (!vb) > > goto finished; > > > > @@ -4272,6 +4273,7 @@ void __init vmalloc_init(void) > > p = &per_cpu(vfree_deferred, i); > > init_llist_head(&p->list); > > INIT_WORK(&p->wq, delayed_vfree_work); > > + xa_init(&vbq->vmap_blocks); > > } > > > > /* Import existing vmlist entries. */ > > -- > > 2.30.2 > >
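A sketch of what a vm_map_ram()/vm_unmap_ram() test could look like in
the style of lib/test_vmalloc.c; the function name, its registration in
the test-case table and the use of the test_loop_count knob are
assumptions, not existing code:

<snip>
static int vm_map_ram_test(void)
{
	struct page *page;
	void *vaddr;
	int i;

	page = alloc_page(GFP_KERNEL);
	if (!page)
		return -1;

	for (i = 0; i < test_loop_count; i++) {
		vaddr = vm_map_ram(&page, 1, NUMA_NO_NODE);
		if (!vaddr)
			break;

		vm_unmap_ram(vaddr, 1);
	}

	__free_page(page);
	return i == test_loop_count ? 0 : -1;
}
<snip>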
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 978194dc2bb8..13b5342bed9a 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1911,6 +1911,7 @@ static struct vmap_area *find_unlink_vmap_area(unsigned long addr)
 struct vmap_block_queue {
 	spinlock_t lock;
 	struct list_head free;
+	struct xarray vmap_blocks;
 };
 
 struct vmap_block {
@@ -1927,25 +1928,18 @@ struct vmap_block {
 /* Queue of free and dirty vmap blocks, for allocation and flushing purposes */
 static DEFINE_PER_CPU(struct vmap_block_queue, vmap_block_queue);
 
-/*
- * XArray of vmap blocks, indexed by address, to quickly find a vmap block
- * in the free path. Could get rid of this if we change the API to return a
- * "cookie" from alloc, to be passed to free. But no big deal yet.
- */
-static DEFINE_XARRAY(vmap_blocks);
-
-/*
- * We should probably have a fallback mechanism to allocate virtual memory
- * out of partially filled vmap blocks. However vmap block sizing should be
- * fairly reasonable according to the vmalloc size, so it shouldn't be a
- * big problem.
- */
+static struct vmap_block_queue *
+addr_to_vbq(unsigned long addr)
+{
+	int cpu = (addr / VMAP_BLOCK_SIZE) % num_possible_cpus();
+	return &per_cpu(vmap_block_queue, cpu);
+}
 
-static unsigned long addr_to_vb_idx(unsigned long addr)
+static unsigned long
+addr_to_vb_va_start(unsigned long addr)
 {
-	addr -= VMALLOC_START & ~(VMAP_BLOCK_SIZE-1);
-	addr /= VMAP_BLOCK_SIZE;
-	return addr;
+	/* A start address of block an address belongs to. */
+	return rounddown(addr, VMAP_BLOCK_SIZE);
 }
 
 static void *vmap_block_vaddr(unsigned long va_start, unsigned long pages_off)
@@ -1953,7 +1947,7 @@ static void *vmap_block_vaddr(unsigned long va_start, unsigned long pages_off)
 	unsigned long addr;
 
 	addr = va_start + (pages_off << PAGE_SHIFT);
-	BUG_ON(addr_to_vb_idx(addr) != addr_to_vb_idx(va_start));
+	BUG_ON(addr_to_vb_va_start(addr) != addr_to_vb_va_start(va_start));
 	return (void *)addr;
 }
 
@@ -1970,7 +1964,6 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
 	struct vmap_block_queue *vbq;
 	struct vmap_block *vb;
 	struct vmap_area *va;
-	unsigned long vb_idx;
 	int node, err;
 	void *vaddr;
 
@@ -2003,8 +1996,8 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
 	bitmap_set(vb->used_map, 0, (1UL << order));
 	INIT_LIST_HEAD(&vb->free_list);
 
-	vb_idx = addr_to_vb_idx(va->va_start);
-	err = xa_insert(&vmap_blocks, vb_idx, vb, gfp_mask);
+	vbq = addr_to_vbq(va->va_start);
+	err = xa_insert(&vbq->vmap_blocks, va->va_start, vb, gfp_mask);
 	if (err) {
 		kfree(vb);
 		free_vmap_area(va);
@@ -2021,9 +2014,11 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
 
 static void free_vmap_block(struct vmap_block *vb)
 {
+	struct vmap_block_queue *vbq;
 	struct vmap_block *tmp;
 
-	tmp = xa_erase(&vmap_blocks, addr_to_vb_idx(vb->va->va_start));
+	vbq = addr_to_vbq(vb->va->va_start);
+	tmp = xa_erase(&vbq->vmap_blocks, vb->va->va_start);
 	BUG_ON(tmp != vb);
 
 	spin_lock(&vmap_area_lock);
@@ -2135,6 +2130,7 @@ static void vb_free(unsigned long addr, unsigned long size)
 	unsigned long offset;
 	unsigned int order;
 	struct vmap_block *vb;
+	struct vmap_block_queue *vbq;
 
 	BUG_ON(offset_in_page(size));
 	BUG_ON(size > PAGE_SIZE*VMAP_MAX_ALLOC);
@@ -2143,7 +2139,10 @@ static void vb_free(unsigned long addr, unsigned long size)
 
 	order = get_order(size);
 	offset = (addr & (VMAP_BLOCK_SIZE - 1)) >> PAGE_SHIFT;
-	vb = xa_load(&vmap_blocks, addr_to_vb_idx(addr));
+
+	vbq = addr_to_vbq(addr);
+	vb = xa_load(&vbq->vmap_blocks, addr_to_vb_va_start(addr));
+
 	spin_lock(&vb->lock);
 	bitmap_clear(vb->used_map, offset, (1UL << order));
 	spin_unlock(&vb->lock);
@@ -3486,6 +3485,7 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags
 {
 	char *start;
 	struct vmap_block *vb;
+	struct vmap_block_queue *vbq;
 	unsigned long offset;
 	unsigned int rs, re, n;
 
@@ -3503,7 +3503,8 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags
 	 * Area is split into regions and tracked with vmap_block, read out
 	 * each region and zero fill the hole between regions.
 	 */
-	vb = xa_load(&vmap_blocks, addr_to_vb_idx((unsigned long)addr));
+	vbq = addr_to_vbq((unsigned long) addr);
+	vb = xa_load(&vbq->vmap_blocks, addr_to_vb_va_start((unsigned long) addr));
 	if (!vb)
 		goto finished;
 
@@ -4272,6 +4273,7 @@ void __init vmalloc_init(void)
 		p = &per_cpu(vfree_deferred, i);
 		init_llist_head(&p->list);
 		INIT_WORK(&p->wq, delayed_vfree_work);
+		xa_init(&vbq->vmap_blocks);
 	}
 
 	/* Import existing vmlist entries. */
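For context, the xa_init() added in the last hunk sits inside
vmalloc_init()'s per-CPU loop, where vbq already points at that CPU's
queue; the surrounding lines presumably look like this (reproduced here
for illustration, they are not part of the diff):

<snip>
	for_each_possible_cpu(i) {
		struct vmap_block_queue *vbq;
		struct vfree_deferred *p;

		vbq = &per_cpu(vmap_block_queue, i);
		spin_lock_init(&vbq->lock);
		INIT_LIST_HEAD(&vbq->free);
		p = &per_cpu(vfree_deferred, i);
		init_llist_head(&p->list);
		INIT_WORK(&p->wq, delayed_vfree_work);
		xa_init(&vbq->vmap_blocks);
	}
<snip>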