From patchwork Tue Oct 24 12:56:35 2023
X-Patchwork-Submitter: Baolin Wang <baolin.wang@linux.alibaba.com>
X-Patchwork-Id: 157412
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: catalin.marinas@arm.com, will@kernel.org
Cc: akpm@linux-foundation.org, v-songbaohua@oppo.com, yuzhao@google.com,
    baolin.wang@linux.alibaba.com, linux-mm@kvack.org,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH] arm64: mm: drop tlb flush operation when clearing the access bit
Date: Tue, 24 Oct 2023 20:56:35 +0800

Nowadays, ptep_clear_flush_young() is only called by folio_referenced() to
check whether a folio was referenced, and on the ARM64 architecture it also
performs a TLB flush. However, that TLB flush can be expensive on ARM64
servers, especially on systems with a large number of CPUs. Similar to x86,
the comment below applies equally to ARM64, so we can drop the TLB flush
from ptep_clear_flush_young() on ARM64 to improve performance.

"
/* Clearing the accessed bit without a TLB flush
 * doesn't cause data corruption. [ It could cause incorrect
 * page aging and the (mistaken) reclaim of hot pages, but the
 * chance of that should be relatively low. ]
 *
 * So as a performance optimization don't flush the TLB when
 * clearing the accessed bit, it will eventually be flushed by
 * a context switch or a VM operation anyway. [ In the rare
 * event of it not getting flushed for a long time the delay
 * shouldn't really matter because there's no real memory
 * pressure for swapout to react to. ]
 */
"
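For context, the only path that reaches ptep_clear_flush_young() is the page
aging code in mm/rmap.c. The sketch below is heavily condensed and
paraphrased (the real folio_referenced_one() also handles pmd-mapped folios,
MGLRU look-around and the loop-exit conditions), but it shows where the
flush used to be requested:

/* Condensed sketch of mm/rmap.c, not the verbatim upstream code. */
static bool folio_referenced_one(struct folio *folio,
		struct vm_area_struct *vma, unsigned long address, void *arg)
{
	struct folio_referenced_arg *pra = arg;
	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);
	int referenced = 0;

	while (page_vma_mapped_walk(&pvmw)) {
		address = pvmw.address;

		if (pvmw.pte) {
			/*
			 * Expands to ptep_clear_flush_young() for the
			 * primary MMU; on arm64 this issued a TLBI per
			 * pte before this patch.
			 */
			if (ptep_clear_flush_young_notify(vma, address,
							  pvmw.pte))
				referenced++;
		}

		pra->mapcount--;
	}

	if (referenced)
		pra->referenced++;

	return true;
}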
Running thpscale shows some obvious improvements in compaction latency with
this patch:

                                       base               patched
Amean     fault-both-1      1093.19 (   0.00%)    1084.57 *   0.79%*
Amean     fault-both-3      2566.22 (   0.00%)    2228.45 *  13.16%*
Amean     fault-both-5      3591.22 (   0.00%)    3146.73 *  12.38%*
Amean     fault-both-7      4157.26 (   0.00%)    4113.67 *   1.05%*
Amean     fault-both-12     6184.79 (   0.00%)    5218.70 *  15.62%*
Amean     fault-both-18     9103.70 (   0.00%)    7739.71 *  14.98%*
Amean     fault-both-24    12341.73 (   0.00%)   10684.23 *  13.43%*
Amean     fault-both-30    15519.00 (   0.00%)   13695.14 *  11.75%*
Amean     fault-both-32    16189.15 (   0.00%)   14365.73 *  11.26%*

                           base     patched
Duration User            167.78      161.03
Duration System         1836.66     1673.01
Duration Elapsed        2074.58     2059.75

Barry Song previously submitted a similar patch [1] that replaces
ptep_clear_flush_young_notify() with ptep_clear_young_notify() in
folio_referenced_one(). However, I am not sure whether removing the TLB
flush is applicable to every architecture in the kernel, so dropping the
TLB flush only for ARM64 seems a sensible change.

Note: I am okay with both approaches. If someone can help confirm that no
architecture needs the TLB flush when clearing the accessed bit, then I
also think Barry's patch is the better one (I hope Barry can resend his
patch). A rough sketch of that alternative is shown below.

[1] https://lore.kernel.org/lkml/20220617070555.344368-1-21cnbao@gmail.com/
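For illustration only, Barry's alternative (sketched here from the
description above, not copied from [1]) would change the generic caller in
mm/rmap.c rather than the arm64 hook, roughly:

	/* current code: clears the accessed bit and requests a flush */
	if (ptep_clear_flush_young_notify(vma, address, pvmw.pte))
		referenced++;

	/* alternative: clears the accessed bit, never requests a flush */
	if (ptep_clear_young_notify(vma, address, pvmw.pte))
		referenced++;

That variant would drop the flush for every architecture at once, which is
exactly the open question raised above.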
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 arch/arm64/include/asm/pgtable.h | 31 ++++++++++++++++---------------
 1 file changed, 16 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 0bd18de9fd97..2979d796ba9d 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -905,21 +905,22 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
 static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
 					 unsigned long address, pte_t *ptep)
 {
-	int young = ptep_test_and_clear_young(vma, address, ptep);
-
-	if (young) {
-		/*
-		 * We can elide the trailing DSB here since the worst that can
-		 * happen is that a CPU continues to use the young entry in its
-		 * TLB and we mistakenly reclaim the associated page. The
-		 * window for such an event is bounded by the next
-		 * context-switch, which provides a DSB to complete the TLB
-		 * invalidation.
-		 */
-		flush_tlb_page_nosync(vma, address);
-	}
-
-	return young;
+	/*
+	 * This comment is borrowed from x86, but applies equally to ARM64:
+	 *
+	 * Clearing the accessed bit without a TLB flush doesn't cause
+	 * data corruption. [ It could cause incorrect page aging and
+	 * the (mistaken) reclaim of hot pages, but the chance of that
+	 * should be relatively low. ]
+	 *
+	 * So as a performance optimization don't flush the TLB when
+	 * clearing the accessed bit, it will eventually be flushed by
+	 * a context switch or a VM operation anyway. [ In the rare
+	 * event of it not getting flushed for a long time the delay
+	 * shouldn't really matter because there's no real memory
+	 * pressure for swapout to react to. ]
+	 */
+	return ptep_test_and_clear_young(vma, address, ptep);
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE