From patchwork Sat Nov 26 19:12:13 2022
X-Patchwork-Submitter: andrey.konovalov@linux.dev
X-Patchwork-Id: 26288
From: andrey.konovalov@linux.dev
To: Marco Elver, "David S. Miller", Eric Dumazet, Jakub Kicinski,
	Paolo Abeni
Cc: Andrey Konovalov, Alexander Potapenko, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev@googlegroups.com, Peter Collingbourne,
	Evgenii Stepanov, Florian Mayer, Jann Horn, Mark Brand,
	netdev@vger.kernel.org, Andrew Morton, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Andrey Konovalov
Subject: [PATCH v2 2/2] net, kasan: sample tagging of skb allocations with HW_TAGS
Date: Sat, 26 Nov 2022 20:12:13 +0100
Message-Id: <7bf26d03fab8d99cdeea165990e9f2cf054b77d6.1669489329.git.andreyknvl@google.com>
In-Reply-To: <4c341c5609ed09ad6d52f937eeec28d142ff1f46.1669489329.git.andreyknvl@google.com>
References: <4c341c5609ed09ad6d52f937eeec28d142ff1f46.1669489329.git.andreyknvl@google.com>
MIME-Version: 1.0

From: Andrey Konovalov

As skb page_alloc allocations tend to be big, tagging and checking
all such allocations with Hardware Tag-Based KASAN
introduces a significant slowdown in testing scenarios that
extensively use the network. This is undesirable, as Hardware
Tag-Based KASAN is intended to be used in production, and thus its
performance impact is crucial.

Use the __GFP_KASAN_SAMPLE flag for skb page_alloc allocations to
make KASAN use sampling and tag only some of these allocations.

When running a local loopback test on a testing MTE-enabled device
in sync mode, enabling Hardware Tag-Based KASAN introduces a 50%
slowdown. Applying this patch and setting kasan.page_alloc.sampling
to a value higher than 1 allows lowering the slowdown. The
performance improvement saturates around a sampling interval value
of 10, which lowers the slowdown to 20%. The slowdown in real
scenarios will likely be smaller.

Signed-off-by: Andrey Konovalov
---
 net/core/skbuff.c | 4 ++--
 net/core/sock.c   | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 88fa40571d0c..fdea87deee13 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -6135,8 +6135,8 @@ struct sk_buff *alloc_skb_with_frags(unsigned long header_len,
 	while (order) {
 		if (npages >= 1 << order) {
 			page = alloc_pages((gfp_mask & ~__GFP_DIRECT_RECLAIM) |
-					   __GFP_COMP |
-					   __GFP_NOWARN,
+					   __GFP_COMP | __GFP_NOWARN |
+					   __GFP_KASAN_SAMPLE,
 					   order);
 			if (page)
 				goto fill_page;
diff --git a/net/core/sock.c b/net/core/sock.c
index a3ba0358c77c..f7d20070ad88 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -2842,7 +2842,7 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t gfp)
 		/* Avoid direct reclaim but allow kswapd to wake */
 		pfrag->page = alloc_pages((gfp & ~__GFP_DIRECT_RECLAIM) |
 					  __GFP_COMP | __GFP_NOWARN |
-					  __GFP_NORETRY,
+					  __GFP_NORETRY | __GFP_KASAN_SAMPLE,
 					  SKB_FRAG_PAGE_ORDER);
 		if (likely(pfrag->page)) {
 			pfrag->size = PAGE_SIZE << SKB_FRAG_PAGE_ORDER;