[v2,2/2] net, kasan: sample tagging of skb allocations with HW_TAGS

Message ID 7bf26d03fab8d99cdeea165990e9f2cf054b77d6.1669489329.git.andreyknvl@google.com
State New
Series [v2,1/2] kasan: allow sampling page_alloc allocations for HW_TAGS

Commit Message

andrey.konovalov@linux.dev Nov. 26, 2022, 7:12 p.m. UTC
  From: Andrey Konovalov <andreyknvl@google.com>

As skb page_alloc allocations tend to be big, tagging and checking all
such allocations with Hardware Tag-Based KASAN introduces a significant
slowdown in testing scenarios that extensively use the network. This is
undesirable, as Hardware Tag-Based KASAN is intended to be used in
production and thus its performance impact is crucial.

Use the __GFP_KASAN_SAMPLE flag for skb page_alloc allocations so that KASAN
samples these allocations and tags only some of them.
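
For illustration, a minimal sketch of how a HW_TAGS page_alloc hook could
gate tagging on this flag (the counter and helper names here, as well as the
exact sampling logic, are assumptions for illustration only and are not the
code added by patch 1/2):

	/* Illustrative placeholder names, not taken from this series. */
	static atomic_long_t page_alloc_sample_counter;

	static bool kasan_sample_page_alloc(gfp_t flags)
	{
		/* Without the flag, every allocation is tagged as before. */
		if (!(flags & __GFP_KASAN_SAMPLE))
			return true;

		/* Tag only one in every kasan_page_alloc_sampling flagged allocations. */
		return atomic_long_inc_return(&page_alloc_sample_counter) %
		       kasan_page_alloc_sampling == 0;
	}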

When running a local loopback test on a testing MTE-enabled device in sync
mode, enabling Hardware Tag-Based KASAN introduces a 50% slowdown. Applying
this patch and setting kasan.page_alloc.sampling to a value higher than 1
reduces the slowdown. The performance improvement saturates around a
sampling interval of 10, which brings the slowdown down to 20%. The slowdown
in real-world scenarios is likely to be smaller.
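
For example, the sampling interval at the observed saturation point could be
set on the kernel command line (parameter name as referenced in this series;
the exact syntax accepted by patch 1/2 may differ):

	kasan.page_alloc.sampling=10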

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 net/core/skbuff.c | 4 ++--
 net/core/sock.c   | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)
  

Patch

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 88fa40571d0c..fdea87deee13 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -6135,8 +6135,8 @@  struct sk_buff *alloc_skb_with_frags(unsigned long header_len,
 		while (order) {
 			if (npages >= 1 << order) {
 				page = alloc_pages((gfp_mask & ~__GFP_DIRECT_RECLAIM) |
-						   __GFP_COMP |
-						   __GFP_NOWARN,
+						   __GFP_COMP | __GFP_NOWARN |
+						   __GFP_KASAN_SAMPLE,
 						   order);
 				if (page)
 					goto fill_page;
diff --git a/net/core/sock.c b/net/core/sock.c
index a3ba0358c77c..f7d20070ad88 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -2842,7 +2842,7 @@  bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t gfp)
 		/* Avoid direct reclaim but allow kswapd to wake */
 		pfrag->page = alloc_pages((gfp & ~__GFP_DIRECT_RECLAIM) |
 					  __GFP_COMP | __GFP_NOWARN |
-					  __GFP_NORETRY,
+					  __GFP_NORETRY | __GFP_KASAN_SAMPLE,
 					  SKB_FRAG_PAGE_ORDER);
 		if (likely(pfrag->page)) {
 			pfrag->size = PAGE_SIZE << SKB_FRAG_PAGE_ORDER;