Message ID | 20230320170009.GA27961@debian
---|---
State | New
Headers |
From: Richard Gobert <richardbgobert@gmail.com>
Date: Mon, 20 Mar 2023 18:00:11 +0100
To: davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, dsahern@kernel.org, alexanderduyck@fb.com, lucien.xin@gmail.com, lixiaoyan@google.com, iwienand@redhat.com, leon@kernel.org, ye.xingchen@zte.com.cn, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 2/2] gro: optimise redundant parsing of packets
Message-ID: <20230320170009.GA27961@debian>
In-Reply-To: <20230320163703.GA27712@debian>
Series | gro: optimise redundant parsing of packets
Commit Message
Richard Gobert
March 20, 2023, 5 p.m. UTC
Currently the IPv6 extension headers are parsed twice: first in
ipv6_gro_receive, and then again in ipv6_gro_complete.
By using the new ->transport_proto field, and also storing the size of the
network header, we can avoid parsing the extension headers a second time in
ipv6_gro_complete (which saves multiple memory dereferences and conditional
checks inside ipv6_exthdrs_len for a varying number of extension headers in
IPv6 packets).
The implementation had to handle both inner and outer layers in case of
encapsulation (as they can't use the same field). I've applied a similar
optimisation to Ethernet.
Performance tests for a TCP stream over IPv6 with a varying number of
extension headers demonstrate a throughput improvement of ~0.7%.
Signed-off-by: Richard Gobert <richardbgobert@gmail.com>
---
v3 -> v4:
- Updated commit msg as Eric suggested.
- No code changes.
---
include/net/gro.h | 9 +++++++++
net/ethernet/eth.c | 14 +++++++++++---
net/ipv6/ip6_offload.c | 20 +++++++++++++++-----
3 files changed, 35 insertions(+), 8 deletions(-)
Comments
On Mon, 2023-03-20 at 18:00 +0100, Richard Gobert wrote:
> Currently the IPv6 extension headers are parsed twice: first in
> ipv6_gro_receive, and then again in ipv6_gro_complete.
> [...]
> Performance tests for TCP stream over IPv6 with a varying amount of
> extension headers demonstrate throughput improvement of ~0.7%.

I'm surprised that the improvement is measurable: for large aggregate packets, a single ipv6_exthdrs_len() call is avoided out of tens of calls for the individual packets. Additionally, such a figure is comparable to the noise level in my tests.

This adds a couple of additional branches for the common (no extension headers) case.

While patch 1/2 could be useful, patch 2/2 overall does not look worthwhile to me. I suggest re-posting only patch 1 for inclusion, unless others have strong differing opinions.

Cheers,

Paolo
On Wed, Mar 22, 2023 at 2:59 AM Paolo Abeni <pabeni@redhat.com> wrote:
> [...]
> I suggest to re-post for inclusion only patch 1, unless others have
> strong different opinions.

+2

I have the same feeling/opinion.
> On Wed, Mar 22, 2023 at 2:59 AM Paolo Abeni <pabeni@redhat.com> wrote:
> > [...]
> > I'm surprised that the improvement is measurable: for large aggregate
> > packets a single ipv6_exthdrs_len() call is avoided out of tens of
> > calls for the individual pkts. Additionally such figure is comparable
> > to noise level in my tests.

It's not simple, but I made an effort to create a quiet environment. Correct configuration allows this kind of measurement to be made, as the test is CPU-bound and noise is variance that can be reduced with enough samples.
Environment example: 100Gbit NIC (mlx5), physical machine, i9 12th gen

    # power-management and hyperthreading disabled in BIOS
    # sysctl preallocate net mem
    echo 0 > /sys/devices/system/cpu/cpufreq/boost   # disable turboboost
    ethtool -A enp1s0f0np0 rx off tx off autoneg off # no PAUSE frames

    # Single core performance
    for x in /sys/devices/system/cpu/cpu[1-9]*/online; do echo 0 >"$x"; done

    ./network-testing-master/bin/netfilter_unload_modules.sh 2>/dev/null # unload netfilter
    tuned-adm profile latency-performance
    cpupower frequency-set -f 2200MHz   # set core to specific frequency
    systemctl isolate rescue-ssh.target # and kill all processes besides init

> > This adds a couple of additional branches for the common (no extensions
> > header) case.

The additional branch in ipv6_gro_receive would be negligible, or even non-existent given a branch predictor, in the common case (non-encapsulated packets). I could wrap it with a likely() macro if you wish.

Inside ipv6_gro_complete, a couple of branches are saved for the common case, as demonstrated below.

Original code, ipv6_gro_complete (ipv6_exthdrs_len is inlined):

    // if (skb->encapsulation)
    ffffffff81c4962b: f6 87 81 00 00 00 20  testb $0x20,0x81(%rdi)
    ffffffff81c49632: 74 2a                 je    ffffffff81c4965e <ipv6_gro_complete+0x3e>

    ...
    // nhoff += sizeof(*iph) + ipv6_exthdrs_len(iph, &ops);
    ffffffff81c4969c: eb 1b                 jmp    ffffffff81c496b9 <ipv6_gro_complete+0x99>  <-- jump to beginning of for loop
    ffffffff81c4968e: b8 28 00 00 00        mov    $0x28,%eax
    ffffffff81c49693: 31 f6                 xor    %esi,%esi
    ffffffff81c49695: 48 c7 c7 c0 28 aa 82  mov    $0xffffffff82aa28c0,%rdi
    ffffffff81c4969c: eb 1b                 jmp    ffffffff81c496b9 <ipv6_gro_complete+0x99>
    ffffffff81c4969e: f6 41 18 01           testb  $0x1,0x18(%rcx)
    ffffffff81c496a2: 74 34                 je     ffffffff81c496d8 <ipv6_gro_complete+0xb8>  <--- 3rd conditional check: !((*opps)->flags & INET6_PROTO_GSO_EXTHDR)
    ffffffff81c496a4: 48 98                 cltq
    ffffffff81c496a6: 48 01 c2              add    %rax,%rdx
    ffffffff81c496a9: 0f b6 42 01           movzbl 0x1(%rdx),%eax
    ffffffff81c496ad: 0f b6 0a              movzbl (%rdx),%ecx
    ffffffff81c496b0: 8d 04 c5 08 00 00 00  lea    0x8(,%rax,8),%eax
    ffffffff81c496b7: 01 c6                 add    %eax,%esi
    ffffffff81c496b9: 85 c9                 test   %ecx,%ecx  <--- for loop starts here
    ffffffff81c496bb: 74 e7                 je     ffffffff81c496a4 <ipv6_gro_complete+0x84>  <--- 1st conditional check: proto != NEXTHDR_HOP
    ffffffff81c496bd: 48 8b 0c cf           mov    (%rdi,%rcx,8),%rcx
    ffffffff81c496c1: 48 85 c9              test   %rcx,%rcx
    ffffffff81c496c4: 75 d8                 jne    ffffffff81c4969e <ipv6_gro_complete+0x7e>  <--- 2nd conditional check: unlikely(!(*opps))

    ... (indirect call ops->callbacks.gro_complete)

ipv6_exthdrs_len contains a loop which has 3 conditional checks. For the common (no extension headers) case, in the new code, *all 3 branches are completely avoided*.

Patched code, ipv6_gro_complete:

    // if (skb->encapsulation)
    ffffffff81befe58: f6 83 81 00 00 00 20  testb  $0x20,0x81(%rbx)
    ffffffff81befe5f: 74 78                 je     ffffffff81befed9 <ipv6_gro_complete+0xb9>

    ...

    // else
    ffffffff81befed9: 0f b6 43 50           movzbl 0x50(%rbx),%eax
    ffffffff81befedd: 0f b7 73 4c           movzwl 0x4c(%rbx),%esi
    ffffffff81befee1: 48 8b 0c c5 c0 3f a9  mov    -0x7d56c040(,%rax,8),%rcx

    ... (indirect call ops->callbacks.gro_complete)

Thus, the patch is beneficial for both the common case and the ext-hdr case.
I would appreciate a second consideration :)

> > while patch 1/2 could be useful, patch 2/2 overall looks not worthy to
> > me.
> >
> > I suggest to re-post for inclusion only patch 1, unless others have
> > strong different opinions.
>
> +2
>
> I have the same feeling/opinion.
On Wed, 2023-03-22 at 20:33 +0100, Richard Gobert wrote:
> [...]
> Thus, the patch is beneficial for both the common case and the ext
> hdr case. I would appreciate a second consideration :)

A problem with the above analysis is that it does not take into consideration the places where the new branches are added: eth_gro_receive() and ipv6_gro_receive().

Note that these functions are called for each packet on the wire: multiple times for each aggregate packet.

The above is likely not measurable in terms of a pps delta, but the added CPU cycles spent in the common case are definitely there. In my opinion, that outweighs the benefit for the extension-headers case.

Cheers,

Paolo

P.S. Please refrain from off-list pings. That is ignored by most and considered rude by some.
> On Wed, 2023-03-22 at 20:33 +0100, Richard Gobert wrote:
> > [...]
>
> A problem with the above analysis is that it does not take into
> consideration the places where the new branches are added:
> eth_gro_receive() and ipv6_gro_receive().
>
> [...]
>
> P.S. Please refrain from off-list pings. That is ignored by most and
> considered rude by some.

Thanks, I will re-post the first patch as a new one.

As for the second patch, I get your point; you are correct. I didn't pay enough attention to the accumulated overhead during the receive phase, as it wasn't showing up in my measurements. I'll look further into it and check whether I can come up with a better solution.

Sorry for the off-list ping. Is it OK to send a ping via the mailing list?
diff --git a/include/net/gro.h b/include/net/gro.h
index 7b47dd6ce94f..35f60ea99f6c 100644
--- a/include/net/gro.h
+++ b/include/net/gro.h
@@ -86,6 +86,15 @@ struct napi_gro_cb {
 
 	/* used to support CHECKSUM_COMPLETE for tunneling protocols */
 	__wsum	csum;
+
+	/* Used in ipv6_gro_receive() */
+	u16	network_len;
+
+	/* Used in eth_gro_receive() */
+	__be16	network_proto;
+
+	/* Used in ipv6_gro_receive() */
+	u8	transport_proto;
 };
 
 #define NAPI_GRO_CB(skb) ((struct napi_gro_cb *)(skb)->cb)
diff --git a/net/ethernet/eth.c b/net/ethernet/eth.c
index 2edc8b796a4e..c2b77d9401e4 100644
--- a/net/ethernet/eth.c
+++ b/net/ethernet/eth.c
@@ -439,6 +439,9 @@ struct sk_buff *eth_gro_receive(struct list_head *head, struct sk_buff *skb)
 		goto out;
 	}
 
+	if (!NAPI_GRO_CB(skb)->encap_mark)
+		NAPI_GRO_CB(skb)->network_proto = type;
+
 	skb_gro_pull(skb, sizeof(*eh));
 	skb_gro_postpull_rcsum(skb, eh, sizeof(*eh));
 
@@ -455,13 +458,18 @@ EXPORT_SYMBOL(eth_gro_receive);
 
 int eth_gro_complete(struct sk_buff *skb, int nhoff)
 {
-	struct ethhdr *eh = (struct ethhdr *)(skb->data + nhoff);
-	__be16 type = eh->h_proto;
 	struct packet_offload *ptype;
+	struct ethhdr *eh;
 	int err = -ENOSYS;
+	__be16 type;
 
-	if (skb->encapsulation)
+	if (skb->encapsulation) {
+		eh = (struct ethhdr *)(skb->data + nhoff);
 		skb_set_inner_mac_header(skb, nhoff);
+		type = eh->h_proto;
+	} else {
+		type = NAPI_GRO_CB(skb)->network_proto;
+	}
 
 	ptype = gro_find_complete_by_type(type);
 	if (ptype != NULL)
diff --git a/net/ipv6/ip6_offload.c b/net/ipv6/ip6_offload.c
index 00dc2e3b0184..6e3a923ad573 100644
--- a/net/ipv6/ip6_offload.c
+++ b/net/ipv6/ip6_offload.c
@@ -232,6 +232,11 @@ INDIRECT_CALLABLE_SCOPE struct sk_buff *ipv6_gro_receive(struct list_head *head,
 	flush--;
 	nlen = skb_network_header_len(skb);
 
+	if (!NAPI_GRO_CB(skb)->encap_mark) {
+		NAPI_GRO_CB(skb)->transport_proto = proto;
+		NAPI_GRO_CB(skb)->network_len = nlen;
+	}
+
 	list_for_each_entry(p, head, list) {
 		const struct ipv6hdr *iph2;
 		__be32 first_word; /* <Version:4><Traffic_Class:8><Flow_Label:20> */
@@ -324,10 +329,6 @@ INDIRECT_CALLABLE_SCOPE int ipv6_gro_complete(struct sk_buff *skb, int nhoff)
 	int err = -ENOSYS;
 	u32 payload_len;
 
-	if (skb->encapsulation) {
-		skb_set_inner_protocol(skb, cpu_to_be16(ETH_P_IPV6));
-		skb_set_inner_network_header(skb, nhoff);
-	}
 
 	payload_len = skb->len - nhoff - sizeof(*iph);
 	if (unlikely(payload_len > IPV6_MAXPLEN)) {
@@ -341,6 +342,7 @@ INDIRECT_CALLABLE_SCOPE int ipv6_gro_complete(struct sk_buff *skb, int nhoff)
 		skb->len += hoplen;
 		skb->mac_header -= hoplen;
 		skb->network_header -= hoplen;
+		NAPI_GRO_CB(skb)->network_len += hoplen;
 
 		iph = (struct ipv6hdr *)(skb->data + nhoff);
 		hop_jumbo = (struct hop_jumbo_hdr *)(iph + 1);
@@ -358,7 +360,15 @@ INDIRECT_CALLABLE_SCOPE int ipv6_gro_complete(struct sk_buff *skb, int nhoff)
 		iph->payload_len = htons(payload_len);
 	}
 
-	nhoff += sizeof(*iph) + ipv6_exthdrs_len(iph, &ops);
+	if (skb->encapsulation) {
+		skb_set_inner_protocol(skb, cpu_to_be16(ETH_P_IPV6));
+		skb_set_inner_network_header(skb, nhoff);
+		nhoff += sizeof(*iph) + ipv6_exthdrs_len(iph, &ops);
+	} else {
+		ops = rcu_dereference(inet6_offloads[NAPI_GRO_CB(skb)->transport_proto]);
+		nhoff += NAPI_GRO_CB(skb)->network_len;
+	}
+
 	if (WARN_ON(!ops || !ops->callbacks.gro_complete))
 		goto out;