Message ID | 20231129-slub-percpu-caches-v3-0-6bcf536772bc@suse.cz |
---|---|
Series | SLUB percpu array caches and maple tree nodes |
Message
Vlastimil Babka
Nov. 29, 2023, 9:53 a.m. UTC
Also in git [1]. Changes since v2 [2]:
- empty cache refill/full cache flush using internal bulk operations
- bulk alloc/free operations also use the cache
- memcg, KASAN etc. hooks processed when the cache is used for the
  operation - now fully transparent
- NUMA node-specific allocations now explicitly bypass the cache

[1] https://git.kernel.org/vbabka/l/slub-percpu-caches-v3r2
[2] https://lore.kernel.org/all/20230810163627.6206-9-vbabka@suse.cz/

----

At LSF/MM I've mentioned that I see several use cases for introducing
opt-in percpu arrays for caching alloc/free objects in SLUB. This is my
first exploration of this idea, specifically for the use case of maple
tree nodes. The assumptions are:

- percpu arrays will be faster than bulk alloc/free, which needs
  relatively long freelists to work well. Especially in the freeing case
  we need the nodes to come from the same slab (or a small set of those).

- preallocation for the worst case of needed nodes for a tree operation
  that can't reclaim due to locks is wasteful. We could instead expect
  that most of the time percpu arrays would satisfy the constrained
  allocations, and in the rare cases they do not we can dip into
  GFP_ATOMIC reserves temporarily. So instead of preallocation, just
  prefill the arrays.

- NUMA locality of the nodes is not a concern, as the nodes of a
  process's VMA tree end up all over the place anyway.

Patches 1-4 are preparatory, but should also work as standalone fixes
and cleanups, so I would like to add them for 6.8 after review, probably
rebased on top of the current series in slab/for-next (mainly the SLAB
removal), as that should be easier to follow than the necessary conflict
resolutions.

Patch 5 adds the per-cpu array caches support. Locking is stolen from
Mel's recent page allocator pcplists implementation, so it can avoid
disabling IRQs and just disable preemption; the trylocks can fail in
rare situations, but in most cases the locks are uncontended, so the
locking should be cheap.

The maple tree is then modified in patches 6-9 to benefit from this. Of
those, only Liam's patches make sense as they are; the rest are my crude
hacks, and Liam is already working on a better solution for the maple
tree side. I'm including this only so the bots have something for
testing that uses the new code. The stats below thus likely don't
reflect the full benefits that can be achieved from cache prefill vs.
preallocation.

I've briefly tested this with a virtme VM boot, checking the stats from
CONFIG_SLUB_STATS in sysfs.

Patch 5: SLUB per-cpu array caches implemented, including new counters,
but the maple tree doesn't use them yet:

  /sys/kernel/slab/maple_node # grep . alloc_cpu_cache alloc_*path free_cpu_cache free_*path cpu_cache* | cut -d' ' -f1
  alloc_cpu_cache:0
  alloc_fastpath:20213
  alloc_slowpath:1741
  free_cpu_cache:0
  free_fastpath:10754
  free_slowpath:9232
  cpu_cache_flush:0
  cpu_cache_refill:0

Patch 7: the maple node cache creates a percpu array with 32 entries;
nothing else is changed. The majority of alloc/free operations are
satisfied by the array, and the number of flushed/refilled objects is
1/3 of the cached operations, so the hit ratio is 2/3. Note that the
flush/refill operations also increase the fastpath/slowpath counters,
so the majority of those indeed come from the flushes and refills.

  alloc_cpu_cache:11880
  alloc_fastpath:4131
  alloc_slowpath:587
  free_cpu_cache:13075
  free_fastpath:437
  free_slowpath:2216
  cpu_cache_flush:4336
  cpu_cache_refill:3216

Patch 9: this tries to replace the maple tree's preallocation with the
cache prefill. It should thus reduce all of the counters, as many of the
preallocations for the worst-case scenarios are not needed in the end.
But according to Liam it's not the full solution, which probably
explains why the reduction is only modest.

  alloc_cpu_cache:11540
  alloc_fastpath:3756
  alloc_slowpath:512
  free_cpu_cache:12775
  free_fastpath:388
  free_slowpath:1944
  cpu_cache_flush:3904
  cpu_cache_refill:2742

---

Liam R. Howlett (2):
      tools: Add SLUB percpu array functions for testing
      maple_tree: Remove MA_STATE_PREALLOC

Vlastimil Babka (7):
      mm/slub: fix bulk alloc and free stats
      mm/slub: introduce __kmem_cache_free_bulk() without free hooks
      mm/slub: handle bulk and single object freeing separately
      mm/slub: free KFENCE objects in slab_free_hook()
      mm/slub: add opt-in percpu array cache of objects
      maple_tree: use slub percpu array
      maple_tree: replace preallocation with slub percpu array prefill

 include/linux/slab.h                    |   4 +
 include/linux/slub_def.h                |  12 +
 lib/maple_tree.c                        |  46 ++-
 mm/Kconfig                              |   1 +
 mm/slub.c                               | 561 +++++++++++++++++++++++++++++---
 tools/include/linux/slab.h              |   4 +
 tools/testing/radix-tree/linux.c        |  14 +
 tools/testing/radix-tree/linux/kernel.h |   1 +
 8 files changed, 578 insertions(+), 65 deletions(-)

---
base-commit: b85ea95d086471afb4ad062012a4d73cd328fa86
change-id: 20231128-slub-percpu-caches-9441892011d7

Best regards,
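For illustration, here is a minimal sketch of what the alloc-side fast
path described for patch 5 could look like, assuming a trylock-only lock
embedded in a per-CPU array as in the pcplists scheme mentioned above.
The structure, field and function names here are assumptions made for
this sketch, not the code from the actual series.

/*
 * Minimal sketch of an alloc-side per-CPU array fast path.
 * All names (slub_percpu_array, cpu_array, pca_alloc) are illustrative.
 */
#include <linux/percpu.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct slub_percpu_array {
	spinlock_t lock;	/* only ever trylock'ed, from this CPU */
	unsigned int count;	/* objects currently cached */
	unsigned int size;	/* capacity, e.g. 32 for maple nodes */
	void *objects[];
};

/* Assumed wrapper; the real kmem_cache layout differs. */
struct kmem_cache_with_array {
	struct kmem_cache *cache;
	struct slub_percpu_array __percpu *cpu_array;
};

static void *pca_alloc(struct kmem_cache_with_array *s)
{
	/* get_cpu_ptr() disables preemption only, not IRQs */
	struct slub_percpu_array *pca = get_cpu_ptr(s->cpu_array);
	void *object = NULL;

	/*
	 * In the rare contended case (e.g. re-entry from an interrupt on
	 * the same CPU) the trylock fails and we simply miss the cache.
	 */
	if (spin_trylock(&pca->lock)) {
		if (pca->count)
			object = pca->objects[--pca->count];
		spin_unlock(&pca->lock);
	}
	put_cpu_ptr(s->cpu_array);

	/* NULL means: fall back to the regular SLUB allocation path. */
	return object;
}

The free side would be symmetric: trylock, store the object if the array
is below capacity, otherwise flush a batch through the internal bulk
free - which is where the cpu_cache_flush and cpu_cache_refill counters
shown above come from.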
Comments
On Wed, 29 Nov 2023, Vlastimil Babka wrote:

> At LSF/MM I've mentioned that I see several use cases for introducing
> opt-in percpu arrays for caching alloc/free objects in SLUB. This is my
> first exploration of this idea, specifically for the use case of maple
> tree nodes. The assumptions are:

Hohumm... So we are not really removing SLAB but merging SLAB features
into SLUB. In addition to per-cpu slabs, we now have per-cpu queues.

> - percpu arrays will be faster than bulk alloc/free, which needs
>   relatively long freelists to work well. Especially in the freeing case
>   we need the nodes to come from the same slab (or a small set of those).

Percpu arrays require the code to handle individual objects. Handling
freelists in partial slabs means that numerous objects can be handled at
once by handling the pointer to the list of objects.

In order to make the SLUB in-page freelists work better you need to have
larger freelists, and that comes with larger page sizes. I.e. boot with
slub_min_order=5 or so to increase performance.

This also means increasing TLB pressure. The in-page freelists of SLUB
cause objects from the same page to be served. The SLAB queueing approach
results in objects being mixed from any address, and thus neighboring
objects may require more TLB entries.

> - preallocation for the worst case of needed nodes for a tree operation
>   that can't reclaim due to locks is wasteful. We could instead expect
>   that most of the time percpu arrays would satisfy the constrained
>   allocations, and in the rare cases they do not we can dip into
>   GFP_ATOMIC reserves temporarily. So instead of preallocation, just
>   prefill the arrays.

The partial percpu slabs could already do the same.

> - NUMA locality of the nodes is not a concern, as the nodes of a
>   process's VMA tree end up all over the place anyway.

NUMA locality is already controlled by the user through the node
specification for percpu slabs. All objects coming from the same in-page
freelist of SLUB have the same NUMA locality, which simplifies things.

If you would consider NUMA locality for the percpu array then you'd be
back to my beloved alien caches. We were not able to avoid those when we
tuned SLAB for maximum performance.

> Patch 5 adds the per-cpu array caches support. Locking is stolen from
> Mel's recent page allocator pcplists implementation, so it can avoid
> disabling IRQs and just disable preemption; the trylocks can fail in
> rare situations, but in most cases the locks are uncontended, so the
> locking should be cheap.

OK, the locking is new, but the design follows basic SLAB queue handling.
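For context on the bulk interface being contrasted with per-CPU arrays
here, this is roughly how a caller uses it today; the cache pointer is
just an assumed example and is not tied to the maple tree code.

/*
 * The existing bulk interface: the caller works with an array of object
 * pointers, and freeing is cheapest when the objects come from the same
 * slab page (a "detached freelist").
 */
#include <linux/kernel.h>
#include <linux/slab.h>

static int bulk_example(struct kmem_cache *cachep)
{
	void *objs[16];
	int nr;

	/* Returns the number of objects actually allocated (0 on failure). */
	nr = kmem_cache_alloc_bulk(cachep, GFP_KERNEL, ARRAY_SIZE(objs), objs);
	if (!nr)
		return -ENOMEM;

	/* ... use the objects ... */

	kmem_cache_free_bulk(cachep, nr, objs);
	return 0;
}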
On Wed, Nov 29, 2023 at 12:16:17PM -0800, Christoph Lameter (Ampere) wrote:

> Percpu arrays require the code to handle individual objects. Handling
> freelists in partial slabs means that numerous objects can be handled at
> once by handling the pointer to the list of objects.

That works great until you hit degenerate cases like having one or two
free objects per slab. Users have hit these cases and complained about
them. Arrays are much cheaper than lists, around 10x in my testing.

> In order to make the SLUB in-page freelists work better you need to have
> larger freelists, and that comes with larger page sizes. I.e. boot with
> slub_min_order=5 or so to increase performance.

That comes with its own problems, of course.

> This also means increasing TLB pressure. The in-page freelists of SLUB
> cause objects from the same page to be served. The SLAB queueing approach
> results in objects being mixed from any address, and thus neighboring
> objects may require more TLB entries.

Is that still a concern for modern CPUs? We're using 1GB TLB entries
these days, and there are usually thousands of TLB entries. This feels
like more of a concern for a 90s-era CPU.
On 11/29/23 21:16, Christoph Lameter (Ampere) wrote:
> On Wed, 29 Nov 2023, Vlastimil Babka wrote:
>
>> At LSF/MM I've mentioned that I see several use cases for introducing
>> opt-in percpu arrays for caching alloc/free objects in SLUB. This is my
>> first exploration of this idea, specifically for the use case of maple
>> tree nodes. The assumptions are:
>
> Hohumm... So we are not really removing SLAB but merging SLAB features
> into SLUB.

Hey, you've tried a similar thing back in 2010 too :)
https://lore.kernel.org/all/20100521211541.003062117@quilx.com/

> In addition to per-cpu slabs, we now have per-cpu queues.

But importantly, it's very consciously opt-in. Whether the caches using
percpu arrays can also skip per-cpu (partial) slabs remains to be seen.

>> - percpu arrays will be faster than bulk alloc/free, which needs
>>   relatively long freelists to work well. Especially in the freeing case
>>   we need the nodes to come from the same slab (or a small set of those).
>
> Percpu arrays require the code to handle individual objects. Handling
> freelists in partial slabs means that numerous objects can be handled at
> once by handling the pointer to the list of objects.
>
> In order to make the SLUB in-page freelists work better you need to have
> larger freelists, and that comes with larger page sizes. I.e. boot with
> slub_min_order=5 or so to increase performance.

In the freeing case, you might still end up with objects mixed from
different slab pages, so the detached freelist building will be
inefficient.

> This also means increasing TLB pressure. The in-page freelists of SLUB
> cause objects from the same page to be served. The SLAB queueing approach
> results in objects being mixed from any address, and thus neighboring
> objects may require more TLB entries.

As Willy noted, we have 1GB entries in the direct map. We also found out
that even if there are actions that cause it to fragment, it's not worth
trying to minimize that fragmentation - https://lwn.net/Articles/931406/

>> - preallocation for the worst case of needed nodes for a tree operation
>>   that can't reclaim due to locks is wasteful. We could instead expect
>>   that most of the time percpu arrays would satisfy the constrained
>>   allocations, and in the rare cases they do not we can dip into
>>   GFP_ATOMIC reserves temporarily. So instead of preallocation, just
>>   prefill the arrays.
>
> The partial percpu slabs could already do the same.

Possibly for the prefill, but efficient freeing will always be an issue.

>> - NUMA locality of the nodes is not a concern, as the nodes of a
>>   process's VMA tree end up all over the place anyway.
>
> NUMA locality is already controlled by the user through the node
> specification for percpu slabs. All objects coming from the same in-page
> freelist of SLUB have the same NUMA locality, which simplifies things.
>
> If you would consider NUMA locality for the percpu array then you'd be
> back to my beloved alien caches. We were not able to avoid those when we
> tuned SLAB for maximum performance.

True, it's easier not to support NUMA locality.

>> Patch 5 adds the per-cpu array caches support. Locking is stolen from
>> Mel's recent page allocator pcplists implementation, so it can avoid
>> disabling IRQs and just disable preemption; the trylocks can fail in
>> rare situations, but in most cases the locks are uncontended, so the
>> locking should be cheap.
>
> OK, the locking is new, but the design follows basic SLAB queue handling.
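To make the "consciously opt-in" point concrete, here is a minimal
sketch of how a single cache might request an array. The setup helper's
name, signature and return convention are assumptions for illustration,
not the exact API from the series.

/*
 * Only a cache that explicitly asks for it gets a per-CPU array;
 * everything else keeps the current SLUB behaviour.
 */
#include <linux/init.h>
#include <linux/maple_tree.h>
#include <linux/slab.h>

/* Assumed prototype of the opt-in helper. */
int kmem_cache_setup_percpu_array(struct kmem_cache *s, unsigned int count);

static struct kmem_cache *example_maple_node_cache;

static int __init example_maple_cache_init(void)
{
	example_maple_node_cache = kmem_cache_create("example_maple_node",
			sizeof(struct maple_node), sizeof(struct maple_node),
			SLAB_PANIC, NULL);

	/* Opt this one cache in to a 32-entry per-CPU array. */
	return kmem_cache_setup_percpu_array(example_maple_node_cache, 32);
}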
On Wed, 29 Nov 2023, Matthew Wilcox wrote:

>> In order to make the SLUB in-page freelists work better you need to have
>> larger freelists, and that comes with larger page sizes. I.e. boot with
>> slub_min_order=5 or so to increase performance.
>
> That comes with its own problems, of course.

Well, I thought you were solving those with the folios?

>> This also means increasing TLB pressure. The in-page freelists of SLUB
>> cause objects from the same page to be served. The SLAB queueing approach
>> results in objects being mixed from any address, and thus neighboring
>> objects may require more TLB entries.
>
> Is that still a concern for modern CPUs? We're using 1GB TLB entries
> these days, and there are usually thousands of TLB entries. This feels
> like more of a concern for a 90s-era CPU.

ARM kernel memory is mapped by 4K entries by default, since rodata=full
is the default. Security concerns screw it up.