Message ID | 20240220021804.9541-1-shijie@os.amperecomputing.com |
---|---|
State | New |
Headers |
From: Huang Shijie <shijie@os.amperecomputing.com>
To: kuba@kernel.org
Cc: patches@amperecomputing.com, davem@davemloft.net, horms@kernel.org, edumazet@google.com, ast@kernel.org, dhowells@redhat.com, linyunsheng@huawei.com, aleksander.lobakin@intel.com, linux-kernel@vger.kernel.org, netdev@vger.kernel.org, cl@os.amperecomputing.com, Huang Shijie <shijie@os.amperecomputing.com>
Subject: [PATCH] net: skbuff: allocate the fclone in the current NUMA node
Date: Tue, 20 Feb 2024 10:18:04 +0800
Message-Id: <20240220021804.9541-1-shijie@os.amperecomputing.com>
X-Mailer: git-send-email 2.40.1 |
Series | net: skbuff: allocate the fclone in the current NUMA node |
Commit Message
Huang Shijie
Feb. 20, 2024, 2:18 a.m. UTC
The current code passes NUMA_NO_NODE to __alloc_skb(), we found
it may creates fclone SKB in remote NUMA node.
So use numa_node_id() to limit the allocation to current NUMA node.
Signed-off-by: Huang Shijie <shijie@os.amperecomputing.com>
---
include/linux/skbuff.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Comments
On Tue, Feb 20, 2024 at 3:18 AM Huang Shijie <shijie@os.amperecomputing.com> wrote:
>
> The current code passes NUMA_NO_NODE to __alloc_skb(), we found
> it may creates fclone SKB in remote NUMA node.

This is intended (WAI)

What about the NUMA policies of the current thread ?

Has NUMA_NO_NODE behavior changed recently?

What means : "it may creates" ? Please be more specific.

>
> So use numa_node_id() to limit the allocation to current NUMA node.

We prefer the allocation to succeed, instead of failing if the current
NUMA node has no available memory.

Please check:

grep . /sys/devices/system/node/node*/numastat

Are you going to change ~700 uses of NUMA_NO_NODE in the kernel ?

Just curious.

> Signed-off-by: Huang Shijie <shijie@os.amperecomputing.com>
> ---
>  include/linux/skbuff.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
> index 2dde34c29203..ebc42b2604ad 100644
> --- a/include/linux/skbuff.h
> +++ b/include/linux/skbuff.h
> @@ -1343,7 +1343,7 @@ static inline bool skb_fclone_busy(const struct sock *sk,
>  static inline struct sk_buff *alloc_skb_fclone(unsigned int size,
>                                                 gfp_t priority)
>  {
> -       return __alloc_skb(size, priority, SKB_ALLOC_FCLONE, NUMA_NO_NODE);
> +       return __alloc_skb(size, priority, SKB_ALLOC_FCLONE, numa_node_id());
>  }
>
>  struct sk_buff *skb_morph(struct sk_buff *dst, struct sk_buff *src);
> --
> 2.40.1
>
On 2024/2/20 13:32, Eric Dumazet wrote:
> On Tue, Feb 20, 2024 at 3:18 AM Huang Shijie
> <shijie@os.amperecomputing.com> wrote:
>> The current code passes NUMA_NO_NODE to __alloc_skb(), we found
>> it may creates fclone SKB in remote NUMA node.
> This is intended (WAI)

Okay. thanks a lot.

It seems I should fix the issue in other code, not the networking.

> What about the NUMA policies of the current thread ?

We use "numactl -m 0" for memcached, the NUMA policy should allocate
fclone in node 0, but we can see many fclones were allocated in node 1.

We have enough memory to allocate these fclones in node 0.

> Has NUMA_NO_NODE behavior changed recently?

I guess not.

> What means : "it may creates" ? Please be more specific.

When we use the memcached for testing in NUMA, there are maybe 20% ~ 30%
fclones were allocated in remote NUMA node.

After this patch, all the fclones are allocated correctly.

>> So use numa_node_id() to limit the allocation to current NUMA node.
> We prefer the allocation to succeed, instead of failing if the current
> NUMA node has no available memory.

Got it.

Thanks
Huang Shijie

>
> Please check:
>
> grep . /sys/devices/system/node/node*/numastat
>
> Are you going to change ~700 uses of NUMA_NO_NODE in the kernel ?
>
> Just curious.
>
>> Signed-off-by: Huang Shijie <shijie@os.amperecomputing.com>
>> ---
>>  include/linux/skbuff.h | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
>> index 2dde34c29203..ebc42b2604ad 100644
>> --- a/include/linux/skbuff.h
>> +++ b/include/linux/skbuff.h
>> @@ -1343,7 +1343,7 @@ static inline bool skb_fclone_busy(const struct sock *sk,
>>  static inline struct sk_buff *alloc_skb_fclone(unsigned int size,
>>                                                 gfp_t priority)
>>  {
>> -       return __alloc_skb(size, priority, SKB_ALLOC_FCLONE, NUMA_NO_NODE);
>> +       return __alloc_skb(size, priority, SKB_ALLOC_FCLONE, numa_node_id());
>>  }
>>
>>  struct sk_buff *skb_morph(struct sk_buff *dst, struct sk_buff *src);
>> --
>> 2.40.1
>>
On Tue, Feb 20, 2024 at 7:26 AM Shijie Huang
<shijie@amperemail.onmicrosoft.com> wrote:
>
>
> On 2024/2/20 13:32, Eric Dumazet wrote:
> > On Tue, Feb 20, 2024 at 3:18 AM Huang Shijie
> > <shijie@os.amperecomputing.com> wrote:
> >> The current code passes NUMA_NO_NODE to __alloc_skb(), we found
> >> it may creates fclone SKB in remote NUMA node.
> > This is intended (WAI)
>
> Okay. thanks a lot.
>
> It seems I should fix the issue in other code, not the networking.
>
> > What about the NUMA policies of the current thread ?
>
> We use "numactl -m 0" for memcached, the NUMA policy should allocate
> fclone in
>
> node 0, but we can see many fclones were allocated in node 1.
>
> We have enough memory to allocate these fclones in node 0.
>
> > Has NUMA_NO_NODE behavior changed recently?
> I guess not.
> > What means : "it may creates" ? Please be more specific.
>
> When we use the memcached for testing in NUMA, there are maybe 20% ~ 30%
> fclones were allocated in
>
> remote NUMA node.

Interesting, how was it measured exactly ?

Are you using SLUB or SLAB ?

> After this patch, all the fclones are allocated correctly.

Note that skbs for TCP have three memory components (or more for large packets)

sk_buff
skb->head
page frags (see sk_page_frag_refill() for non zero copy payload)

The payload should be following NUMA policy of current thread, that is
really what matters.
On 2024/2/20 16:17, Eric Dumazet wrote:
> On Tue, Feb 20, 2024 at 7:26 AM Shijie Huang
> <shijie@amperemail.onmicrosoft.com> wrote:
>>
>> On 2024/2/20 13:32, Eric Dumazet wrote:
>>> On Tue, Feb 20, 2024 at 3:18 AM Huang Shijie
>>> <shijie@os.amperecomputing.com> wrote:
>>>> The current code passes NUMA_NO_NODE to __alloc_skb(), we found
>>>> it may creates fclone SKB in remote NUMA node.
>>> This is intended (WAI)
>> Okay. thanks a lot.
>>
>> It seems I should fix the issue in other code, not the networking.
>>
>>> What about the NUMA policies of the current thread ?
>> We use "numactl -m 0" for memcached, the NUMA policy should allocate
>> fclone in
>>
>> node 0, but we can see many fclones were allocated in node 1.
>>
>> We have enough memory to allocate these fclones in node 0.
>>
>>> Has NUMA_NO_NODE behavior changed recently?
>> I guess not.
>>> What means : "it may creates" ? Please be more specific.
>> When we use the memcached for testing in NUMA, there are maybe 20% ~ 30%
>> fclones were allocated in
>>
>> remote NUMA node.
> Interesting, how was it measured exactly ?

I created a private patch to record the status for each fclone allocation.

> Are you using SLUB or SLAB ?

I think I use SLUB. (CONFIG_SLUB=y,
CONFIG_SLAB_MERGE_DEFAULT=y, CONFIG_SLUB_CPU_PARTIAL=y)

Thanks
Huang Shijie
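The private instrumentation patch mentioned above was not posted to the list. Purely as an illustration of the idea, the sketch below shows one way such per-node accounting could look; the helper and counter names are hypothetical and are not taken from the actual patch.

#include <linux/atomic.h>
#include <linux/mm.h>
#include <linux/skbuff.h>
#include <linux/topology.h>

/*
 * Hypothetical debug helper (not the author's actual private patch):
 * count whether each fclone sk_buff ended up on the local NUMA node.
 */
static atomic_long_t fclone_local_allocs;
static atomic_long_t fclone_remote_allocs;

static inline void fclone_numa_account(const struct sk_buff *skb)
{
	/* Node that actually backs the sk_buff slab object. */
	int skb_nid = page_to_nid(virt_to_page(skb));

	if (skb_nid == numa_node_id())
		atomic_long_inc(&fclone_local_allocs);
	else
		atomic_long_inc(&fclone_remote_allocs);
}

A helper of this kind could, for example, be called where the fclone is allocated and the two counters exported through debugfs, which would give the local/remote ratio quoted in the thread.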
On Tue, Feb 20, 2024 at 9:37 AM Shijie Huang
<shijie@amperemail.onmicrosoft.com> wrote:
>
>
> On 2024/2/20 16:17, Eric Dumazet wrote:
> > On Tue, Feb 20, 2024 at 7:26 AM Shijie Huang
> > <shijie@amperemail.onmicrosoft.com> wrote:
> >>
> >> On 2024/2/20 13:32, Eric Dumazet wrote:
> >>> On Tue, Feb 20, 2024 at 3:18 AM Huang Shijie
> >>> <shijie@os.amperecomputing.com> wrote:
> >>>> The current code passes NUMA_NO_NODE to __alloc_skb(), we found
> >>>> it may creates fclone SKB in remote NUMA node.
> >>> This is intended (WAI)
> >> Okay. thanks a lot.
> >>
> >> It seems I should fix the issue in other code, not the networking.
> >>
> >>> What about the NUMA policies of the current thread ?
> >> We use "numactl -m 0" for memcached, the NUMA policy should allocate
> >> fclone in
> >>
> >> node 0, but we can see many fclones were allocated in node 1.
> >>
> >> We have enough memory to allocate these fclones in node 0.
> >>
> >>> Has NUMA_NO_NODE behavior changed recently?
> >> I guess not.
> >>> What means : "it may creates" ? Please be more specific.
> >> When we use the memcached for testing in NUMA, there are maybe 20% ~ 30%
> >> fclones were allocated in
> >>
> >> remote NUMA node.
> > Interesting, how was it measured exactly ?
>
> I created a private patch to record the status for each fclone allocation.
>
> > Are you using SLUB or SLAB ?
>
> I think I use SLUB. (CONFIG_SLUB=y,
> CONFIG_SLAB_MERGE_DEFAULT=y,CONFIG_SLUB_CPU_PARTIAL=y)
>

A similar issue comes from tx_action() calling __napi_kfree_skb() on
arbitrary skbs including ones that were allocated on a different NUMA node.

This pollutes per-cpu caches with not optimally placed sk_buff :/

Although this should not impact fclones, __napi_kfree_skb() only ?

commit 15fad714be86eab13e7568fecaf475b2a9730d3e
Author: Jesper Dangaard Brouer <brouer@redhat.com>
Date:   Mon Feb 8 13:15:04 2016 +0100

    net: bulk free SKBs that were delay free'ed due to IRQ context

What about :

diff --git a/net/core/dev.c b/net/core/dev.c
index c588808be77f563c429eb4a2eaee5c8062d99582..63165138c6f690e14520f11e32dc16f2845abad4 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -5162,11 +5162,7 @@ static __latent_entropy void net_tx_action(struct softirq_action *h)
 			trace_kfree_skb(skb, net_tx_action,
 					get_kfree_skb_cb(skb)->reason);
 
-			if (skb->fclone != SKB_FCLONE_UNAVAILABLE)
-				__kfree_skb(skb);
-			else
-				__napi_kfree_skb(skb,
-						 get_kfree_skb_cb(skb)->reason);
+			__kfree_skb(skb);
 		}
 	}
From: Huang Shijie <shijie@os.amperecomputing.com>
Date: Tue, 20 Feb 2024 10:18:04 +0800

> The current code passes NUMA_NO_NODE to __alloc_skb(), we found
> it may creates fclone SKB in remote NUMA node.
>
> So use numa_node_id() to limit the allocation to current NUMA node.
>
> Signed-off-by: Huang Shijie <shijie@os.amperecomputing.com>
> ---
>  include/linux/skbuff.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
> index 2dde34c29203..ebc42b2604ad 100644
> --- a/include/linux/skbuff.h
> +++ b/include/linux/skbuff.h
> @@ -1343,7 +1343,7 @@ static inline bool skb_fclone_busy(const struct sock *sk,
>  static inline struct sk_buff *alloc_skb_fclone(unsigned int size,
>                                                 gfp_t priority)
>  {
> -	return __alloc_skb(size, priority, SKB_ALLOC_FCLONE, NUMA_NO_NODE);
> +	return __alloc_skb(size, priority, SKB_ALLOC_FCLONE, numa_node_id());

Because it tries to defragment the memory and pick an optimal node.

__alloc_skb() and skb clones aren't anyway something very hotpathish, do
you have any particular perf numbers and/or usecases where %NUMA_NO_NODE
really hurts?

> }
>
> struct sk_buff *skb_morph(struct sk_buff *dst, struct sk_buff *src);

Thanks,
Olek
On 24/02/2024 20.07, Eric Dumazet wrote:
> On Tue, Feb 20, 2024 at 9:37 AM Shijie Huang
> <shijie@amperemail.onmicrosoft.com> wrote:
>>
>>
>> On 2024/2/20 16:17, Eric Dumazet wrote:
>>> On Tue, Feb 20, 2024 at 7:26 AM Shijie Huang
>>> <shijie@amperemail.onmicrosoft.com> wrote:
>>>>
>>>> On 2024/2/20 13:32, Eric Dumazet wrote:
>>>>> On Tue, Feb 20, 2024 at 3:18 AM Huang Shijie
>>>>> <shijie@os.amperecomputing.com> wrote:
>>>>>> The current code passes NUMA_NO_NODE to __alloc_skb(), we found
>>>>>> it may creates fclone SKB in remote NUMA node.
>>>>> This is intended (WAI)
>>>> Okay. thanks a lot.
>>>>
>>>> It seems I should fix the issue in other code, not the networking.
>>>>
>>>>> What about the NUMA policies of the current thread ?
>>>> We use "numactl -m 0" for memcached, the NUMA policy should allocate
>>>> fclone in
>>>>
>>>> node 0, but we can see many fclones were allocated in node 1.
>>>>
>>>> We have enough memory to allocate these fclones in node 0.
>>>>
>>>>> Has NUMA_NO_NODE behavior changed recently?
>>>> I guess not.
>>>>> What means : "it may creates" ? Please be more specific.
>>>> When we use the memcached for testing in NUMA, there are maybe 20% ~ 30%
>>>> fclones were allocated in
>>>>
>>>> remote NUMA node.
>>> Interesting, how was it measured exactly ?
>>
>> I created a private patch to record the status for each fclone allocation.
>>
>>> Are you using SLUB or SLAB ?
>>
>> I think I use SLUB. (CONFIG_SLUB=y,
>> CONFIG_SLAB_MERGE_DEFAULT=y,CONFIG_SLUB_CPU_PARTIAL=y)
>>
>
> A similar issue comes from tx_action() calling __napi_kfree_skb() on
> arbitrary skbs
> including ones that were allocated on a different NUMA node.
>
> This pollutes per-cpu caches with not optimally placed sk_buff :/
>
> Although this should not impact fclones, __napi_kfree_skb() only ?
>
> commit 15fad714be86eab13e7568fecaf475b2a9730d3e
> Author: Jesper Dangaard Brouer <brouer@redhat.com>
> Date:   Mon Feb 8 13:15:04 2016 +0100
>
>     net: bulk free SKBs that were delay free'ed due to IRQ context
>
> What about :
>
> diff --git a/net/core/dev.c b/net/core/dev.c
> index c588808be77f563c429eb4a2eaee5c8062d99582..63165138c6f690e14520f11e32dc16f2845abad4 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -5162,11 +5162,7 @@ static __latent_entropy void net_tx_action(struct softirq_action *h)
>                          trace_kfree_skb(skb, net_tx_action,
>                                          get_kfree_skb_cb(skb)->reason);
>
> -                        if (skb->fclone != SKB_FCLONE_UNAVAILABLE)
> -                                __kfree_skb(skb);
> -                        else
> -                                __napi_kfree_skb(skb,
> -                                                 get_kfree_skb_cb(skb)->reason);

Yes, I think it makes sense to avoid calling __napi_kfree_skb here.
The __napi_kfree_skb call will cache SKB slub-allocation (but "release"
data) on a per CPU napi_alloc_cache (see code napi_skb_cache_put()).
In net_tx_action() there is a chance this could originate from another
CPU or even NUMA node.  I notice this is only for SKBs on the
softnet_data->completion_queue, which have a high chance of being cache
cold.  My patch 15fad714be86e only made sense when we bulk freed these
SKBs, but after Olek's changes to cache freed SKBs, then this shouldn't
be calling __napi_kfree_skb() (previously named __kfree_skb_defer).

I support this RFC patch from Eric.

Acked-by: Jesper Dangaard Brouer <hawk@kernel.org>

> +                        __kfree_skb(skb);
>                  }
>          }
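The per-CPU cache referred to above can be pictured roughly as in the sketch below. This is a deliberately simplified, hypothetical model, not the kernel's actual napi_alloc_cache implementation (the cpu_skb_cache structure and the skb_cache_put()/skb_cache_get() names are made up); the point it illustrates is that whatever slab object the local CPU parked on its cache, including one backed by a remote NUMA node, is what the next allocation on that CPU hands back.

#include <linux/percpu.h>
#include <linux/skbuff.h>

#define SKB_CACHE_SIZE 64

/* Simplified stand-in for the per-CPU skb cache. */
struct cpu_skb_cache {
	unsigned int count;
	struct sk_buff *skbs[SKB_CACHE_SIZE];
};

static DEFINE_PER_CPU(struct cpu_skb_cache, cpu_skb_cache);

/* Park a freed sk_buff slab object on this CPU's cache. */
static void skb_cache_put(struct sk_buff *skb)
{
	struct cpu_skb_cache *c = this_cpu_ptr(&cpu_skb_cache);

	if (c->count < SKB_CACHE_SIZE)
		c->skbs[c->count++] = skb;	/* may be remote-node memory */
	/* else: the real code frees the object back to the slab cache */
}

/* Reuse the most recently parked object, whatever node backs it. */
static struct sk_buff *skb_cache_get(void)
{
	struct cpu_skb_cache *c = this_cpu_ptr(&cpu_skb_cache);

	return c->count ? c->skbs[--c->count] : NULL;
}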
On Mon, Feb 26, 2024 at 11:18 AM Jesper Dangaard Brouer <hawk@kernel.org> wrote:
>
>
> On 24/02/2024 20.07, Eric Dumazet wrote:
> > On Tue, Feb 20, 2024 at 9:37 AM Shijie Huang
> > <shijie@amperemail.onmicrosoft.com> wrote:
> >>
> >>
> >> On 2024/2/20 16:17, Eric Dumazet wrote:
> >>> On Tue, Feb 20, 2024 at 7:26 AM Shijie Huang
> >>> <shijie@amperemail.onmicrosoft.com> wrote:
> >>>>
> >>>> On 2024/2/20 13:32, Eric Dumazet wrote:
> >>>>> On Tue, Feb 20, 2024 at 3:18 AM Huang Shijie
> >>>>> <shijie@os.amperecomputing.com> wrote:
> >>>>>> The current code passes NUMA_NO_NODE to __alloc_skb(), we found
> >>>>>> it may creates fclone SKB in remote NUMA node.
> >>>>> This is intended (WAI)
> >>>> Okay. thanks a lot.
> >>>>
> >>>> It seems I should fix the issue in other code, not the networking.
> >>>>
> >>>>> What about the NUMA policies of the current thread ?
> >>>> We use "numactl -m 0" for memcached, the NUMA policy should allocate
> >>>> fclone in
> >>>>
> >>>> node 0, but we can see many fclones were allocated in node 1.
> >>>>
> >>>> We have enough memory to allocate these fclones in node 0.
> >>>>
> >>>>> Has NUMA_NO_NODE behavior changed recently?
> >>>> I guess not.
> >>>>> What means : "it may creates" ? Please be more specific.
> >>>> When we use the memcached for testing in NUMA, there are maybe 20% ~ 30%
> >>>> fclones were allocated in
> >>>>
> >>>> remote NUMA node.
> >>> Interesting, how was it measured exactly ?
> >>
> >> I created a private patch to record the status for each fclone allocation.
> >>
> >>> Are you using SLUB or SLAB ?
> >>
> >> I think I use SLUB. (CONFIG_SLUB=y,
> >> CONFIG_SLAB_MERGE_DEFAULT=y,CONFIG_SLUB_CPU_PARTIAL=y)
> >>
> >
> > A similar issue comes from tx_action() calling __napi_kfree_skb() on
> > arbitrary skbs
> > including ones that were allocated on a different NUMA node.
> >
> > This pollutes per-cpu caches with not optimally placed sk_buff :/
> >
> > Although this should not impact fclones, __napi_kfree_skb() only ?
> >
> > commit 15fad714be86eab13e7568fecaf475b2a9730d3e
> > Author: Jesper Dangaard Brouer <brouer@redhat.com>
> > Date:   Mon Feb 8 13:15:04 2016 +0100
> >
> >     net: bulk free SKBs that were delay free'ed due to IRQ context
> >
> > What about :
> >
> > diff --git a/net/core/dev.c b/net/core/dev.c
> > index c588808be77f563c429eb4a2eaee5c8062d99582..63165138c6f690e14520f11e32dc16f2845abad4 100644
> > --- a/net/core/dev.c
> > +++ b/net/core/dev.c
> > @@ -5162,11 +5162,7 @@ static __latent_entropy void net_tx_action(struct softirq_action *h)
> >                          trace_kfree_skb(skb, net_tx_action,
> >                                          get_kfree_skb_cb(skb)->reason);
> >
> > -                        if (skb->fclone != SKB_FCLONE_UNAVAILABLE)
> > -                                __kfree_skb(skb);
> > -                        else
> > -                                __napi_kfree_skb(skb,
> > -                                                 get_kfree_skb_cb(skb)->reason);
>
> Yes, I think it makes sense to avoid calling __napi_kfree_skb here.
> The __napi_kfree_skb call will cache SKB slub-allocation (but "release"
> data) on a per CPU napi_alloc_cache (see code napi_skb_cache_put()).
> In net_tx_action() there is a chance this could originate from another
> CPU or even NUMA node.  I notice this is only for SKBs on the
> softnet_data->completion_queue, which have a high chance of being cache
> cold.  My patch 15fad714be86e only made sense when we bulk freed these
> SKBs, but after Olek's changes to cache freed SKBs, then this shouldn't
> be calling __napi_kfree_skb() (previously named __kfree_skb_defer).
>
> I support this RFC patch from Eric.
>
> Acked-by: Jesper Dangaard Brouer <hawk@kernel.org>

Note that this should not matter for most NICs, because their drivers
perform TX completion from NAPI context, so we do not hit this path.

It seems that switching to SLUB instead of SLAB has increased the
chances of getting memory from another node.

We probably need to investigate.
On 2024/2/26 18:10, Alexander Lobakin wrote:
> __alloc_skb() and skb clones aren't anyway something very hotpathish, do
> you have any particular perf numbers and/or usecases where %NUMA_NO_NODE
> really hurts?

From the memcached test, I do not see really performance hurts.

Thanks
Huang Shijie
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 2dde34c29203..ebc42b2604ad 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -1343,7 +1343,7 @@ static inline bool skb_fclone_busy(const struct sock *sk,
 static inline struct sk_buff *alloc_skb_fclone(unsigned int size,
 						gfp_t priority)
 {
-	return __alloc_skb(size, priority, SKB_ALLOC_FCLONE, NUMA_NO_NODE);
+	return __alloc_skb(size, priority, SKB_ALLOC_FCLONE, numa_node_id());
 }
 
 struct sk_buff *skb_morph(struct sk_buff *dst, struct sk_buff *src);
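As background on what an fclone buys: alloc_skb_fclone() allocates the sk_buff from a slab cache that places a companion sk_buff next to it, so a later skb_clone() can hand out that companion instead of performing a second slab allocation; both sk_buffs therefore sit on whatever NUMA node the original allocation landed on. A minimal, hypothetical usage sketch (not part of the patch or the kernel tree) follows.

#include <linux/skbuff.h>

/* Hypothetical example: allocate an fclone skb and take its fast clone. */
static void fclone_demo(void)
{
	struct sk_buff *skb, *clone;

	skb = alloc_skb_fclone(1024, GFP_KERNEL);
	if (!skb)
		return;

	/*
	 * The clone reuses the companion sk_buff embedded in the same
	 * fclone slab object, so no new slab allocation is needed here.
	 */
	clone = skb_clone(skb, GFP_ATOMIC);

	kfree_skb(clone);	/* kfree_skb(NULL) is a no-op */
	kfree_skb(skb);
}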