From patchwork Wed Feb 1 19:50:14 2023
X-Patchwork-Submitter: Marcelo Tosatti
X-Patchwork-Id: 51541
Message-ID: <20230201195104.411744803@redhat.com>
User-Agent: quilt/0.67
Date: Wed, 01 Feb 2023 16:50:14 -0300
From: Marcelo Tosatti
To: Christoph Lameter
Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, Marcelo Tosatti
Subject: [PATCH 1/5] mm/vmstat: remove remote node draining
References: <20230201195013.881721887@redhat.com>
Draining of pages from the local pcp for a remote zone was necessary
since:

  "Note that remote node draining is a somewhat esoteric feature that
   is required on large NUMA systems because otherwise significant
   portions of system memory can become trapped in pcp queues. The
   number of pcp is determined by the number of processors and nodes
   in a system. A system with 4 processors and 2 nodes has 8 pcps
   which is okay. But a system with 1024 processors and 512 nodes has
   512k pcps with a high potential for large amount of memory being
   caught in them."

Since commit 443c2accd1b6679a1320167f8f56eed6536b806e
("mm/page_alloc: remotely drain per-cpu lists"), drain_all_pages() is
able to remotely free those pages when necessary.

Signed-off-by: Marcelo Tosatti

Index: linux-vmstat-remote/include/linux/mmzone.h
===================================================================
--- linux-vmstat-remote.orig/include/linux/mmzone.h
+++ linux-vmstat-remote/include/linux/mmzone.h
@@ -577,9 +577,6 @@ struct per_cpu_pages {
 	int high;		/* high watermark, emptying needed */
 	int batch;		/* chunk size for buddy add/remove */
 	short free_factor;	/* batch scaling factor during free */
-#ifdef CONFIG_NUMA
-	short expire;		/* When 0, remote pagesets are drained */
-#endif
 
 	/* Lists of pages, one per migrate type stored on the pcp-lists */
 	struct list_head lists[NR_PCP_LISTS];
Index: linux-vmstat-remote/mm/vmstat.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmstat.c
+++ linux-vmstat-remote/mm/vmstat.c
@@ -803,7 +803,7 @@ static int fold_diff(int *zone_diff, int
  *
  * The function returns the number of global counters updated.
  */
-static int refresh_cpu_vm_stats(bool do_pagesets)
+static int refresh_cpu_vm_stats(void)
 {
 	struct pglist_data *pgdat;
 	struct zone *zone;
@@ -814,9 +814,6 @@ static int refresh_cpu_vm_stats(bool do_
 
 	for_each_populated_zone(zone) {
 		struct per_cpu_zonestat __percpu *pzstats = zone->per_cpu_zonestats;
-#ifdef CONFIG_NUMA
-		struct per_cpu_pages __percpu *pcp = zone->per_cpu_pageset;
-#endif
 
 		for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++) {
 			int v;
@@ -826,44 +823,8 @@ static int refresh_cpu_vm_stats(bool do_
 				atomic_long_add(v, &zone->vm_stat[i]);
 				global_zone_diff[i] += v;
-#ifdef CONFIG_NUMA
-				/* 3 seconds idle till flush */
-				__this_cpu_write(pcp->expire, 3);
-#endif
 			}
 		}
-#ifdef CONFIG_NUMA
-
-		if (do_pagesets) {
-			cond_resched();
-			/*
-			 * Deal with draining the remote pageset of this
-			 * processor
-			 *
-			 * Check if there are pages remaining in this pageset
-			 * if not then there is nothing to expire.
-			 */
-			if (!__this_cpu_read(pcp->expire) ||
-			       !__this_cpu_read(pcp->count))
-				continue;
-
-			/*
-			 * We never drain zones local to this processor.
-			 */
-			if (zone_to_nid(zone) == numa_node_id()) {
-				__this_cpu_write(pcp->expire, 0);
-				continue;
-			}
-
-			if (__this_cpu_dec_return(pcp->expire))
-				continue;
-
-			if (__this_cpu_read(pcp->count)) {
-				drain_zone_pages(zone, this_cpu_ptr(pcp));
-				changes++;
-			}
-		}
-#endif
 	}
 
 	for_each_online_pgdat(pgdat) {
@@ -1864,7 +1825,7 @@ int sysctl_stat_interval __read_mostly =
 #ifdef CONFIG_PROC_FS
 static void refresh_vm_stats(struct work_struct *work)
 {
-	refresh_cpu_vm_stats(true);
+	refresh_cpu_vm_stats();
 }
 
 int vmstat_refresh(struct ctl_table *table, int write,
@@ -1928,7 +1889,7 @@ int vmstat_refresh(struct ctl_table *tab
 
 static void vmstat_update(struct work_struct *w)
 {
-	if (refresh_cpu_vm_stats(true)) {
+	if (refresh_cpu_vm_stats()) {
 		/*
 		 * Counters were updated so we expect more updates
 		 * to occur in the future. Keep on running the
@@ -1991,7 +1952,7 @@ void quiet_vmstat(void)
 	 * it would be too expensive from this path.
 	 * vmstat_shepherd will take care about that for us.
 	 */
-	refresh_cpu_vm_stats(false);
+	refresh_cpu_vm_stats();
 }
 
 /*
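The reasoning above is easier to see with a small standalone model. The
sketch below is a user-space illustration only (a pthread mutex and
invented names stand in for the kernel's per-CPU pageset and its lock;
it is not kernel code): once the per-CPU cache is lock-protected, any
CPU may drain it remotely, which is what drain_all_pages() relies on
after the commit cited above, and why the expire-based aging removed by
this patch is no longer needed.

/*
 * Illustrative user-space model only -- not kernel code.  The mutex plays
 * the role of the pcp lock introduced by commit 443c2accd1b6
 * ("mm/page_alloc: remotely drain per-cpu lists").  All names are invented.
 */
#include <pthread.h>
#include <stdio.h>

#define NR_CPUS 4

struct pcp_cache {
	pthread_mutex_t lock;	/* stands in for the pcp lock */
	int count;		/* pages cached by this CPU */
};

static struct pcp_cache pcp[NR_CPUS];

/* Fast path executed by the owning CPU when it frees a page. */
static void pcp_free_page(int cpu)
{
	pthread_mutex_lock(&pcp[cpu].lock);
	pcp[cpu].count++;
	pthread_mutex_unlock(&pcp[cpu].lock);
}

/* May be called from any CPU: the lock makes remote draining safe. */
static void drain_remote_pcp(int cpu)
{
	pthread_mutex_lock(&pcp[cpu].lock);
	printf("draining %d cached pages of cpu %d\n", pcp[cpu].count, cpu);
	pcp[cpu].count = 0;	/* give the pages back to the shared pool */
	pthread_mutex_unlock(&pcp[cpu].lock);
}

int main(void)
{
	for (int i = 0; i < NR_CPUS; i++)
		pthread_mutex_init(&pcp[i].lock, NULL);

	pcp_free_page(1);
	pcp_free_page(1);
	drain_remote_pcp(1);	/* no expire counter needed on CPU 1 */
	return 0;
}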
From patchwork Wed Feb 1 19:50:15 2023
X-Patchwork-Submitter: Marcelo Tosatti
X-Patchwork-Id: 51545
Message-ID: <20230201195104.436627422@redhat.com>
User-Agent: quilt/0.67
Date: Wed, 01 Feb 2023 16:50:15 -0300
From: Marcelo Tosatti
To: Christoph Lameter
Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, Marcelo Tosatti
Subject: [PATCH 2/5] mm/vmstat: switch counter modification to cmpxchg
References: <20230201195013.881721887@redhat.com>
In preparation for switching vmstat shepherd to flush per-CPU counters
remotely, switch all functions that modify the counters to use cmpxchg.

To measure the performance difference, the page allocator microbenchmark
https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/mm/bench/page_bench01.c
was used with loops=1000000, on an Intel Core i7-11850H @ 2.50GHz.

For the single_page_alloc_free test, which does:

	/** Loop to measure **/
	for (i = 0; i < rec->loops; i++) {
		my_page = alloc_page(gfp_mask);
		if (unlikely(my_page == NULL))
			return 0;
		__free_page(my_page);
	}

Unit is cycles.

	Vanilla		Patched		Diff
	159		156		-1.9%

Signed-off-by: Marcelo Tosatti

Index: linux-vmstat-remote/mm/vmstat.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmstat.c
+++ linux-vmstat-remote/mm/vmstat.c
@@ -334,6 +334,188 @@ void set_pgdat_percpu_threshold(pg_data_
 	}
 }
 
+#ifdef CONFIG_HAVE_CMPXCHG_LOCAL
+/*
+ * If we have cmpxchg_local support then we do not need to incur the overhead
+ * that comes with local_irq_save/restore if we use this_cpu_cmpxchg.
+ *
+ * mod_state() modifies the zone counter state through atomic per cpu
+ * operations.
+ *
+ * Overstep mode specifies how overstep should handled:
+ *        0       No overstepping
+ *        1       Overstepping half of threshold
+ *        -1      Overstepping minus half of threshold
+ */
+static inline void mod_zone_state(struct zone *zone, enum zone_stat_item item,
+				  long delta, int overstep_mode)
+{
+	struct per_cpu_zonestat __percpu *pcp = zone->per_cpu_zonestats;
+	s8 __percpu *p = pcp->vm_stat_diff + item;
+	long o, n, t, z;
+
+	do {
+		z = 0;  /* overflow to zone counters */
+
+		/*
+		 * The fetching of the stat_threshold is racy. We may apply
+		 * a counter threshold to the wrong the cpu if we get
+		 * rescheduled while executing here. However, the next
+		 * counter update will apply the threshold again and
+		 * therefore bring the counter under the threshold again.
+		 *
+		 * Most of the time the thresholds are the same anyways
+		 * for all cpus in a zone.
+		 */
+		t = this_cpu_read(pcp->stat_threshold);
+
+		o = this_cpu_read(*p);
+		n = delta + o;
+
+		if (abs(n) > t) {
+			int os = overstep_mode * (t >> 1);
+
+			/* Overflow must be added to zone counters */
+			z = n + os;
+			n = -os;
+		}
+	} while (this_cpu_cmpxchg(*p, o, n) != o);
+
+	if (z)
+		zone_page_state_add(z, zone, item);
+}
+
+void mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
+			 long delta)
+{
+	mod_zone_state(zone, item, delta, 0);
+}
+EXPORT_SYMBOL(mod_zone_page_state);
+
+void __mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
+			   long delta)
+{
+	mod_zone_state(zone, item, delta, 0);
+}
+EXPORT_SYMBOL(__mod_zone_page_state);
+
+void inc_zone_page_state(struct page *page, enum zone_stat_item item)
+{
+	mod_zone_state(page_zone(page), item, 1, 1);
+}
+EXPORT_SYMBOL(inc_zone_page_state);
+
+void __inc_zone_page_state(struct page *page, enum zone_stat_item item)
+{
+	mod_zone_state(page_zone(page), item, 1, 1);
+}
+EXPORT_SYMBOL(__inc_zone_page_state);
+
+void dec_zone_page_state(struct page *page, enum zone_stat_item item)
+{
+	mod_zone_state(page_zone(page), item, -1, -1);
+}
+EXPORT_SYMBOL(dec_zone_page_state);
+
+void __dec_zone_page_state(struct page *page, enum zone_stat_item item)
+{
+	mod_zone_state(page_zone(page), item, -1, -1);
+}
+EXPORT_SYMBOL(__dec_zone_page_state);
+
+static inline void mod_node_state(struct pglist_data *pgdat,
+				  enum node_stat_item item,
+				  int delta, int overstep_mode)
+{
+	struct per_cpu_nodestat __percpu *pcp = pgdat->per_cpu_nodestats;
+	s8 __percpu *p = pcp->vm_node_stat_diff + item;
+	long o, n, t, z;
+
+	if (vmstat_item_in_bytes(item)) {
+		/*
+		 * Only cgroups use subpage accounting right now; at
+		 * the global level, these items still change in
+		 * multiples of whole pages. Store them as pages
+		 * internally to keep the per-cpu counters compact.
+		 */
+		VM_WARN_ON_ONCE(delta & (PAGE_SIZE - 1));
+		delta >>= PAGE_SHIFT;
+	}
+
+	do {
+		z = 0;  /* overflow to node counters */
+
+		/*
+		 * The fetching of the stat_threshold is racy. We may apply
+		 * a counter threshold to the wrong the cpu if we get
+		 * rescheduled while executing here. However, the next
+		 * counter update will apply the threshold again and
+		 * therefore bring the counter under the threshold again.
+		 *
+		 * Most of the time the thresholds are the same anyways
+		 * for all cpus in a node.
+		 */
+		t = this_cpu_read(pcp->stat_threshold);
+
+		o = this_cpu_read(*p);
+		n = delta + o;
+
+		if (abs(n) > t) {
+			int os = overstep_mode * (t >> 1);
+
+			/* Overflow must be added to node counters */
+			z = n + os;
+			n = -os;
+		}
+	} while (this_cpu_cmpxchg(*p, o, n) != o);
+
+	if (z)
+		node_page_state_add(z, pgdat, item);
+}
+
+void mod_node_page_state(struct pglist_data *pgdat, enum node_stat_item item,
+			 long delta)
+{
+	mod_node_state(pgdat, item, delta, 0);
+}
+EXPORT_SYMBOL(mod_node_page_state);
+
+void __mod_node_page_state(struct pglist_data *pgdat, enum node_stat_item item,
+			   long delta)
+{
+	mod_node_state(pgdat, item, delta, 0);
+}
+EXPORT_SYMBOL(__mod_node_page_state);
+
+void inc_node_state(struct pglist_data *pgdat, enum node_stat_item item)
+{
+	mod_node_state(pgdat, item, 1, 1);
+}
+
+void inc_node_page_state(struct page *page, enum node_stat_item item)
+{
+	mod_node_state(page_pgdat(page), item, 1, 1);
+}
+EXPORT_SYMBOL(inc_node_page_state);
+
+void __inc_node_page_state(struct page *page, enum node_stat_item item)
+{
+	mod_node_state(page_pgdat(page), item, 1, 1);
+}
+EXPORT_SYMBOL(__inc_node_page_state);
+
+void dec_node_page_state(struct page *page, enum node_stat_item item)
+{
+	mod_node_state(page_pgdat(page), item, -1, -1);
+}
+EXPORT_SYMBOL(dec_node_page_state);
+
+void __dec_node_page_state(struct page *page, enum node_stat_item item)
+{
+	mod_node_state(page_pgdat(page), item, -1, -1);
+}
+EXPORT_SYMBOL(__dec_node_page_state);
+#else
 /*
  * For use when we know that interrupts are disabled,
  * or when we know that preemption is disabled and that
@@ -541,149 +723,6 @@ void __dec_node_page_state(struct page *
 }
 EXPORT_SYMBOL(__dec_node_page_state);
 
-#ifdef CONFIG_HAVE_CMPXCHG_LOCAL
-/*
- * If we have cmpxchg_local support then we do not need to incur the overhead
- * that comes with local_irq_save/restore if we use this_cpu_cmpxchg.
- *
- * mod_state() modifies the zone counter state through atomic per cpu
- * operations.
- *
- * Overstep mode specifies how overstep should handled:
- *        0       No overstepping
- *        1       Overstepping half of threshold
- *        -1      Overstepping minus half of threshold
-*/
-static inline void mod_zone_state(struct zone *zone,
-	enum zone_stat_item item, long delta, int overstep_mode)
-{
-	struct per_cpu_zonestat __percpu *pcp = zone->per_cpu_zonestats;
-	s8 __percpu *p = pcp->vm_stat_diff + item;
-	long o, n, t, z;
-
-	do {
-		z = 0;  /* overflow to zone counters */
-
-		/*
-		 * The fetching of the stat_threshold is racy. We may apply
-		 * a counter threshold to the wrong the cpu if we get
-		 * rescheduled while executing here. However, the next
-		 * counter update will apply the threshold again and
-		 * therefore bring the counter under the threshold again.
-		 *
-		 * Most of the time the thresholds are the same anyways
-		 * for all cpus in a zone.
-		 */
-		t = this_cpu_read(pcp->stat_threshold);
-
-		o = this_cpu_read(*p);
-		n = delta + o;
-
-		if (abs(n) > t) {
-			int os = overstep_mode * (t >> 1) ;
-
-			/* Overflow must be added to zone counters */
-			z = n + os;
-			n = -os;
-		}
-	} while (this_cpu_cmpxchg(*p, o, n) != o);
-
-	if (z)
-		zone_page_state_add(z, zone, item);
-}
-
-void mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
-	long delta)
-{
-	mod_zone_state(zone, item, delta, 0);
-}
-EXPORT_SYMBOL(mod_zone_page_state);
-
-void inc_zone_page_state(struct page *page, enum zone_stat_item item)
-{
-	mod_zone_state(page_zone(page), item, 1, 1);
-}
-EXPORT_SYMBOL(inc_zone_page_state);
-
-void dec_zone_page_state(struct page *page, enum zone_stat_item item)
-{
-	mod_zone_state(page_zone(page), item, -1, -1);
-}
-EXPORT_SYMBOL(dec_zone_page_state);
-
-static inline void mod_node_state(struct pglist_data *pgdat,
-	enum node_stat_item item, int delta, int overstep_mode)
-{
-	struct per_cpu_nodestat __percpu *pcp = pgdat->per_cpu_nodestats;
-	s8 __percpu *p = pcp->vm_node_stat_diff + item;
-	long o, n, t, z;
-
-	if (vmstat_item_in_bytes(item)) {
-		/*
-		 * Only cgroups use subpage accounting right now; at
-		 * the global level, these items still change in
-		 * multiples of whole pages. Store them as pages
-		 * internally to keep the per-cpu counters compact.
-		 */
-		VM_WARN_ON_ONCE(delta & (PAGE_SIZE - 1));
-		delta >>= PAGE_SHIFT;
-	}
-
-	do {
-		z = 0;  /* overflow to node counters */
-
-		/*
-		 * The fetching of the stat_threshold is racy. We may apply
-		 * a counter threshold to the wrong the cpu if we get
-		 * rescheduled while executing here. However, the next
-		 * counter update will apply the threshold again and
-		 * therefore bring the counter under the threshold again.
-		 *
-		 * Most of the time the thresholds are the same anyways
-		 * for all cpus in a node.
-		 */
-		t = this_cpu_read(pcp->stat_threshold);
-
-		o = this_cpu_read(*p);
-		n = delta + o;
-
-		if (abs(n) > t) {
-			int os = overstep_mode * (t >> 1) ;
-
-			/* Overflow must be added to node counters */
-			z = n + os;
-			n = -os;
-		}
-	} while (this_cpu_cmpxchg(*p, o, n) != o);
-
-	if (z)
-		node_page_state_add(z, pgdat, item);
-}
-
-void mod_node_page_state(struct pglist_data *pgdat, enum node_stat_item item,
-	long delta)
-{
-	mod_node_state(pgdat, item, delta, 0);
-}
-EXPORT_SYMBOL(mod_node_page_state);
-
-void inc_node_state(struct pglist_data *pgdat, enum node_stat_item item)
-{
-	mod_node_state(pgdat, item, 1, 1);
-}
-
-void inc_node_page_state(struct page *page, enum node_stat_item item)
-{
-	mod_node_state(page_pgdat(page), item, 1, 1);
-}
-EXPORT_SYMBOL(inc_node_page_state);
-
-void dec_node_page_state(struct page *page, enum node_stat_item item)
-{
-	mod_node_state(page_pgdat(page), item, -1, -1);
-}
-EXPORT_SYMBOL(dec_node_page_state);
-#else
 /*
  * Use interrupt disable to serialize counter updates
  */
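Since the rest of the series leans on this lockless update pattern, a
compact analogue may help readers who do not want to trace the per-CPU
machinery. The following user-space sketch is illustrative only: it is
not the kernel code, C11 atomics stand in for this_cpu_read() and
this_cpu_cmpxchg(), a single variable stands in for the per-CPU
differential, and all names are invented. The per-item delta is folded
into the global counter only once it crosses a threshold, without
disabling interrupts or preemption.

/*
 * User-space analogue of the update loop added above (illustration only).
 */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

static atomic_long global_counter;      /* zone->vm_stat[item] analogue */
static _Atomic signed char cpu_diff;    /* pcp->vm_stat_diff[item] analogue */
static const int threshold = 32;        /* pcp->stat_threshold analogue */

static void mod_state(int delta, int overstep_mode)
{
	signed char o, n;
	long z;

	do {
		z = 0;                  /* overflow to the global counter */
		o = atomic_load(&cpu_diff);
		n = delta + o;

		if (abs(n) > threshold) {
			int os = overstep_mode * (threshold >> 1);

			z = n + os;     /* flush to the global counter */
			n = -os;
		}
		/* retry if another update raced in between load and cmpxchg */
	} while (!atomic_compare_exchange_weak(&cpu_diff, &o, n));

	if (z)
		atomic_fetch_add(&global_counter, z);
}

int main(void)
{
	for (int i = 0; i < 1000; i++)
		mod_state(1, 1);        /* e.g. 1000 page allocations */

	printf("global=%ld pending diff=%d\n",
	       atomic_load(&global_counter), atomic_load(&cpu_diff));
	return 0;
}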
From patchwork Wed Feb 1 19:50:16 2023
X-Patchwork-Submitter: Marcelo Tosatti
X-Patchwork-Id: 51543
Message-ID: <20230201195104.460373427@redhat.com>
User-Agent: quilt/0.67
Date: Wed, 01 Feb 2023 16:50:16 -0300
From: Marcelo Tosatti
To: Christoph Lameter
Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, Marcelo Tosatti
Subject: [PATCH 3/5] mm/vmstat: use cmpxchg loop in cpu_vm_stats_fold
References: <20230201195013.881721887@redhat.com>
In preparation for switching vmstat shepherd to flush per-CPU counters
remotely, use a cmpxchg loop instead of a pair of read/write
instructions.

Signed-off-by: Marcelo Tosatti

Index: linux-vmstat-remote/mm/vmstat.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmstat.c
+++ linux-vmstat-remote/mm/vmstat.c
@@ -885,7 +885,7 @@ static int refresh_cpu_vm_stats(void)
 }
 
 /*
- * Fold the data for an offline cpu into the global array.
+ * Fold the data for a cpu into the global array.
  * There cannot be any access by the offline cpu and therefore
  * synchronization is simplified.
  */
@@ -906,8 +906,9 @@ void cpu_vm_stats_fold(int cpu)
 		if (pzstats->vm_stat_diff[i]) {
 			int v;
 
-			v = pzstats->vm_stat_diff[i];
-			pzstats->vm_stat_diff[i] = 0;
+			do {
+				v = pzstats->vm_stat_diff[i];
+			} while (cmpxchg(&pzstats->vm_stat_diff[i], v, 0) != v);
 			atomic_long_add(v, &zone->vm_stat[i]);
 			global_zone_diff[i] += v;
 		}
@@ -917,8 +918,9 @@ void cpu_vm_stats_fold(int cpu)
 		if (pzstats->vm_numa_event[i]) {
 			unsigned long v;
 
-			v = pzstats->vm_numa_event[i];
-			pzstats->vm_numa_event[i] = 0;
+			do {
+				v = pzstats->vm_numa_event[i];
+			} while (cmpxchg(&pzstats->vm_numa_event[i], v, 0) != v);
 			zone_numa_event_add(v, zone, i);
 		}
 	}
@@ -934,8 +936,9 @@ void cpu_vm_stats_fold(int cpu)
 		if (p->vm_node_stat_diff[i]) {
 			int v;
 
-			v = p->vm_node_stat_diff[i];
-			p->vm_node_stat_diff[i] = 0;
+			do {
+				v = p->vm_node_stat_diff[i];
+			} while (cmpxchg(&p->vm_node_stat_diff[i], v, 0) != v);
 			atomic_long_add(v, &pgdat->vm_stat[i]);
 			global_node_diff[i] += v;
 		}
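The change is small, but the reasoning deserves a standalone
illustration. Below is a user-space sketch (C11 atomics, invented names;
not the kernel code) of the difference between the old read-then-zero
pair and the new cmpxchg loop once the shepherd may fold counters that
the owning CPU is still updating concurrently: the racy pair can wipe
out an update that lands between its two steps, while the loop only
zeroes the value it actually observed.

/*
 * User-space illustration of the read-and-zero race (not kernel code).
 */
#include <stdatomic.h>
#include <stdio.h>

static _Atomic int diff;        /* pzstats->vm_stat_diff[i] analogue */
static atomic_long global;      /* zone->vm_stat[i] analogue */

/* Racy variant: an increment between the load and the store is lost. */
static void fold_racy(void)
{
	int v = atomic_load(&diff);
	/* a concurrent atomic_fetch_add(&diff, 1) here would be discarded */
	atomic_store(&diff, 0);
	atomic_fetch_add(&global, v);
}

/* cmpxchg variant: only claims the value it saw, otherwise retries. */
static void fold_cmpxchg(void)
{
	int v;

	do {
		v = atomic_load(&diff);
	} while (!atomic_compare_exchange_weak(&diff, &v, 0));
	atomic_fetch_add(&global, v);
}

int main(void)
{
	atomic_store(&diff, 5);
	fold_cmpxchg();
	atomic_store(&diff, 3);
	fold_racy();
	printf("global=%ld diff=%d\n",
	       atomic_load(&global), atomic_load(&diff));
	return 0;
}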
From patchwork Wed Feb 1 19:50:17 2023
X-Patchwork-Submitter: Marcelo Tosatti
X-Patchwork-Id: 51544
Message-ID: <20230201195104.484635830@redhat.com>
User-Agent: quilt/0.67
Date: Wed, 01 Feb 2023 16:50:17 -0300
From: Marcelo Tosatti
To: Christoph Lameter
Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, Marcelo Tosatti
Subject: [PATCH 4/5] mm/vmstat: switch vmstat shepherd to flush per-CPU counters remotely
References: <20230201195013.881721887@redhat.com>
Now that the counters are modified via cmpxchg both CPU-locally (via the
account functions) and remotely (via cpu_vm_stats_fold), it is possible
to switch vmstat_shepherd to perform the per-CPU vmstats folding
remotely.

This fixes the following two problems:

1. A customer provided evidence indicating that the idle tick was
   stopped, yet CPU-specific vmstat counters still remained populated.
   One can only assume quiet_vmstat() was not invoked on return to the
   idle loop. If I understand correctly, I suspect this divergence might
   erroneously prevent a reclaim attempt by kswapd. If the number of
   zone-specific free pages is below the per-cpu drift value, then
   zone_page_state_snapshot() is used to compute a more accurate view of
   that statistic. Thus any task blocked on the NUMA-node-specific
   pfmemalloc_wait queue will be unable to make significant progress via
   direct reclaim unless it is killed after being woken up by kswapd
   (see throttle_direct_reclaim()).

2. With a SCHED_FIFO task that busy-loops on a given CPU, and the
   kworker for that CPU at SCHED_OTHER priority, queuing work to sync
   per-CPU vmstats will either cause that work never to execute, or
   cause stalld (the stall daemon) to boost the kworker's priority,
   which causes a latency violation.

Signed-off-by: Marcelo Tosatti

Index: linux-vmstat-remote/mm/vmstat.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmstat.c
+++ linux-vmstat-remote/mm/vmstat.c
@@ -2007,6 +2007,23 @@ static void vmstat_shepherd(struct work_
 
 static DECLARE_DEFERRABLE_WORK(shepherd, vmstat_shepherd);
 
+#ifdef CONFIG_HAVE_CMPXCHG_LOCAL
+/* Flush counters remotely if CPU uses cmpxchg to update its per-CPU counters */
+static void vmstat_shepherd(struct work_struct *w)
+{
+	int cpu;
+
+	cpus_read_lock();
+	for_each_online_cpu(cpu) {
+		cpu_vm_stats_fold(cpu);
+		cond_resched();
+	}
+	cpus_read_unlock();
+
+	schedule_delayed_work(&shepherd,
+		round_jiffies_relative(sysctl_stat_interval));
+}
+#else
 static void vmstat_shepherd(struct work_struct *w)
 {
 	int cpu;
@@ -2026,6 +2043,7 @@ static void vmstat_shepherd(struct work_
 	schedule_delayed_work(&shepherd,
 		round_jiffies_relative(sysctl_stat_interval));
 }
+#endif
 
 static void __init start_shepherd_timer(void)
 {
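For readers who want to see problem 2 concretely, here is a minimal,
hypothetical reproducer sketch built from ordinary POSIX calls (nothing
in it comes from this patch): a pinned SCHED_FIFO spinner keeps the
SCHED_OTHER per-CPU vmstat work item from ever running on that CPU,
which is exactly the situation the remote shepherd sidesteps.

/*
 * Hypothetical reproducer sketch: pin a SCHED_FIFO task to one CPU and
 * let it spin.  Run as root; expect to lose the chosen CPU until the
 * program is killed.
 */
#define _GNU_SOURCE
#include <errno.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	struct sched_param sp = { .sched_priority = 1 };
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(1, &set);               /* CPU to monopolize; adjust as needed */

	if (sched_setaffinity(0, sizeof(set), &set) ||
	    sched_setscheduler(0, SCHED_FIFO, &sp)) {
		fprintf(stderr, "need root: %s\n", strerror(errno));
		return 1;
	}

	for (;;)
		;                       /* busy loop starving SCHED_OTHER work */
}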
From patchwork Wed Feb 1 19:50:18 2023
X-Patchwork-Submitter: Marcelo Tosatti
X-Patchwork-Id: 51546
Message-ID: <20230201195104.507817318@redhat.com>
User-Agent: quilt/0.67
Date: Wed, 01 Feb 2023 16:50:18 -0300
From: Marcelo Tosatti
To: Christoph Lameter
Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, Marcelo Tosatti
Subject: [PATCH 5/5] mm/vmstat: refresh stats remotely instead of via work item
References: <20230201195013.881721887@redhat.com>

Refresh per-CPU stats remotely, instead of queueing work items, for the
stat_refresh procfs method.

This fixes a sosreport hang (sosreport uses vmstat_refresh) in the
presence of a spinning SCHED_FIFO process.

Signed-off-by: Marcelo Tosatti

Index: linux-vmstat-remote/mm/vmstat.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmstat.c
+++ linux-vmstat-remote/mm/vmstat.c
@@ -1865,11 +1865,21 @@ static DEFINE_PER_CPU(struct delayed_wor
 int sysctl_stat_interval __read_mostly = HZ;
 
 #ifdef CONFIG_PROC_FS
+
+#ifdef CONFIG_HAVE_CMPXCHG_LOCAL
+static int refresh_all_vm_stats(void);
+#else
 static void refresh_vm_stats(struct work_struct *work)
 {
 	refresh_cpu_vm_stats();
 }
 
+static int refresh_all_vm_stats(void)
+{
+	return schedule_on_each_cpu(refresh_vm_stats);
+}
+#endif
+
 int vmstat_refresh(struct ctl_table *table, int write,
 		   void *buffer, size_t *lenp, loff_t *ppos)
 {
@@ -1889,7 +1899,7 @@ int vmstat_refresh(struct ctl_table *tab
 	 * transiently negative values, report an error here if any of
 	 * the stats is negative, so we know to go looking for imbalance.
 	 */
-	err = schedule_on_each_cpu(refresh_vm_stats);
+	err = refresh_all_vm_stats();
 	if (err)
 		return err;
 	for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++) {
@@ -2009,7 +2019,7 @@ static DECLARE_DEFERRABLE_WORK(shepherd,
 
 #ifdef CONFIG_HAVE_CMPXCHG_LOCAL
 /* Flush counters remotely if CPU uses cmpxchg to update its per-CPU counters */
-static void vmstat_shepherd(struct work_struct *w)
+static int refresh_all_vm_stats(void)
 {
 	int cpu;
 
@@ -2019,7 +2029,12 @@ static void vmstat_shepherd(struct work_
 		cond_resched();
 	}
 	cpus_read_unlock();
+	return 0;
+}
 
+static void vmstat_shepherd(struct work_struct *w)
+{
+	refresh_all_vm_stats();
 	schedule_delayed_work(&shepherd,
 		round_jiffies_relative(sysctl_stat_interval));
 }
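As a usage note, the interface exercised by this patch is the
stat_refresh sysctl. A minimal sketch of the user-space side follows
(plain C against the existing procfs files; nothing here is added by
this series): writing anything to /proc/sys/vm/stat_refresh asks the
kernel to fold the per-CPU counters before /proc/vmstat is read, which
is the equivalent of what tools such as sosreport do and the path that
used to hang when the per-CPU work item could not run.

/*
 * Sketch of the user-space side of vmstat_refresh() (illustration only).
 */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/vm/stat_refresh", "w");
	char line[256];

	if (f) {
		fputs("1\n", f);        /* any write triggers the refresh */
		fclose(f);
	}

	f = fopen("/proc/vmstat", "r");
	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f))
		fputs(line, stdout);    /* freshly folded counters */
	fclose(f);
	return 0;
}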