Message ID | 20230320180745.556821285@redhat.com |
---|---|
State | New |
Headers |
From: Marcelo Tosatti <mtosatti@redhat.com>
To: Christoph Lameter <cl@linux.com>
Cc: Aaron Tomlin <atomlin@atomlin.com>, Frederic Weisbecker <frederic@kernel.org>, Andrew Morton <akpm@linux-foundation.org>, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Russell King <linux@armlinux.org.uk>, Huacai Chen <chenhuacai@kernel.org>, Heiko Carstens <hca@linux.ibm.com>, x86@kernel.org, Vlastimil Babka <vbabka@suse.cz>, Michal Hocko <mhocko@suse.com>, Marcelo Tosatti <mtosatti@redhat.com>
Subject: [PATCH v7 01/13] vmstat: allow_direct_reclaim should use zone_page_state_snapshot
Date: Mon, 20 Mar 2023 15:03:33 -0300
Message-ID: <20230320180745.556821285@redhat.com>
References: <20230320180332.102837832@redhat.com>
Series | fold per-CPU vmstats remotely |
Commit Message
Marcelo Tosatti
March 20, 2023, 6:03 p.m. UTC
A customer provided evidence indicating that a process
was stalled in direct reclaim:
- The process was trapped in throttle_direct_reclaim().
  The function wait_event_killable() was called to wait for the
  condition allow_direct_reclaim(pgdat) to become true for the current
  node. allow_direct_reclaim(pgdat) examined the number of free pages
  on the node via zone_page_state(), which simply returns the value in
  zone->vm_stat[NR_FREE_PAGES].
- On node #1, zone->vm_stat[NR_FREE_PAGES] was 0.
However, the freelist on this node was not empty.
- This inconsistency in the vmstat value was caused by the per-CPU
  vmstat counters on nohz_full CPUs. Every increment/decrement of a
  vmstat counter is first performed on a per-CPU counter, and the
  pooled diffs are then folded into the zone's vmstat counter in a
  timely manner (see the sketch after this list). However, on
  nohz_full CPUs (48 of the 52 CPUs on this customer's system), the
  pooled diffs were never folded once a CPU had no events on it and
  slept indefinitely. Checking the per-CPU vmstat counters showed a
  total of 69 counts not yet folded into the zone's vmstat counter.
- In this situation, kswapd did not help the trapped process.
  In pgdat_balanced(), zone_watermark_ok_safe() examined the number of
  free pages on the node via zone_page_state_snapshot(), which also
  accounts for the pending per-CPU counts, so kswapd correctly saw the
  69 free pages. Since zone->_watermark = {8, 20, 32}, kswapd did not
  run, because 69 was above the high watermark of 32.
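As referenced in the list above, here is a minimal sketch of that
per-CPU update path, modeled on __mod_zone_page_state() in mm/vmstat.c
(abbreviated; exact structure and field names, e.g. per_cpu_zonestats,
vary across kernel versions):

    void __mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
                               long delta)
    {
            struct per_cpu_zonestat __percpu *pcp = zone->per_cpu_zonestats;
            s8 __percpu *p = pcp->vm_stat_diff + item;
            long x = delta + __this_cpu_read(*p);
            long t = __this_cpu_read(pcp->stat_threshold);

            if (unlikely(abs(x) > t)) {
                    /* Over threshold: fold the diff into zone->vm_stat[]. */
                    zone_page_state_add(x, zone, item);
                    x = 0;
            }
            /*
             * Otherwise the delta remains in the per-CPU diff, invisible
             * to plain zone_page_state() readers until it is folded.
             */
            __this_cpu_write(*p, x);
    }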
Change allow_direct_reclaim() to use zone_page_state_snapshot(), which
provides a more precise view of the vmstat counters.

allow_direct_reclaim() is only called from try_to_free_pages(), which
is not a hot path, so the extra cost is acceptable.
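To make the fix concrete, the two accessors differ only in whether the
pending per-CPU diffs are summed in. An abbreviated sketch of their
definitions, roughly as in include/linux/vmstat.h (again, the per-CPU
structure layout differs between kernel versions):

    /* Reads only the folded zone counter; per-CPU diffs are invisible. */
    static inline unsigned long zone_page_state(struct zone *zone,
                                                enum zone_stat_item item)
    {
            long x = atomic_long_read(&zone->vm_stat[item]);
    #ifdef CONFIG_SMP
            if (x < 0)
                    x = 0;
    #endif
            return x;
    }

    /* Also sums the not-yet-folded per-CPU diffs: slower but precise. */
    static inline unsigned long zone_page_state_snapshot(struct zone *zone,
                                                enum zone_stat_item item)
    {
            long x = atomic_long_read(&zone->vm_stat[item]);
    #ifdef CONFIG_SMP
            int cpu;

            for_each_online_cpu(cpu)
                    x += per_cpu_ptr(zone->per_cpu_zonestats,
                                     cpu)->vm_stat_diff[item];

            if (x < 0)
                    x = 0;
    #endif
            return x;
    }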
Suggested-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
---
Comments
On Mon, Mar 20, 2023 at 07:21:04PM +0100, Michal Hocko wrote:
> On Mon 20-03-23 15:03:33, Marcelo Tosatti wrote:
> > A customer provided evidence indicating that a process
> > was stalled in direct reclaim:
> > [...]
>
> Have you managed to test this patch to confirm it addresses the above
> issue? It should but better double check that.
>
> > Suggested-by: Michal Hocko <mhocko@suse.com>
> > Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
>
> The patch makes sense regardless but a note about testing should be
> added.
>
> Acked-by: Michal Hocko <mhocko@suse.com>

Michal,

The patch has not been tested in the original setup where the problem
was found; however, I don't think that validation is easy to do
(checking with the reporter anyway). Perhaps one could find a synthetic
reproducer.

It is pretty easy to observe that, on an isolated nohz_full CPU, the
deferrable timer queued on it (the timer that should queue vmstat_update
on that CPU) does not execute for long periods. This leaves the global
stats stale, since per-CPU free page counts can remain unfolded for as
long as the CPU has tick processing stopped. This matches the available
data.

Thanks!
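For reference on the point about the deferrable timer: the per-CPU
vmstat folding work is set up as deferrable work, so its underlying
timer does not wake a CPU whose tick is stopped. A heavily abbreviated
sketch of the relevant mm/vmstat.c machinery (the exact init path and
return conventions vary by kernel version):

    static DEFINE_PER_CPU(struct delayed_work, vmstat_work);

    /* From the vmstat init path: deferrable work will not wake an
     * idle/tickless CPU, which is why nohz_full CPUs stop folding. */
    for_each_possible_cpu(cpu)
            INIT_DEFERRABLE_WORK(&per_cpu(vmstat_work, cpu), vmstat_update);

    static void vmstat_update(struct work_struct *w)
    {
            if (refresh_cpu_vm_stats(true)) {
                    /* Counters changed: re-arm to fold again next interval. */
                    queue_delayed_work_on(smp_processor_id(), mm_percpu_wq,
                                    this_cpu_ptr(&vmstat_work),
                                    round_jiffies_relative(sysctl_stat_interval));
            }
    }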
On Mon 20-03-23 15:32:15, Marcelo Tosatti wrote:
> On Mon, Mar 20, 2023 at 07:21:04PM +0100, Michal Hocko wrote:
> > On Mon 20-03-23 15:03:33, Marcelo Tosatti wrote:
> > > [...]
> >
> > Have you managed to test this patch to confirm it addresses the above
> > issue? It should but better double check that.
> >
> > The patch makes sense regardless but a note about testing should be
> > added.
> >
> > Acked-by: Michal Hocko <mhocko@suse.com>
>
> Michal,
>
> The patch has not been tested in the original setup where the problem
> was found; however, I don't think that validation is easy to do
> (checking with the reporter anyway).

This is a fair point and I would just add it to the changelog for
future reference.
Index: linux-vmstat-remote/mm/vmscan.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmscan.c
+++ linux-vmstat-remote/mm/vmscan.c
@@ -6861,7 +6861,7 @@ static bool allow_direct_reclaim(pg_data
 			continue;
 
 		pfmemalloc_reserve += min_wmark_pages(zone);
-		free_pages += zone_page_state(zone, NR_FREE_PAGES);
+		free_pages += zone_page_state_snapshot(zone, NR_FREE_PAGES);
 	}
 
 	/* If there are no reserves (unexpected config) then do not throttle */