[bpf-next,v1,0/2] xdp: recycle Page Pool backed skbs built from XDP frames

Message ID 20230301160315.1022488-1-aleksander.lobakin@intel.com
Series: xdp: recycle Page Pool backed skbs built from XDP frames

Message

Alexander Lobakin March 1, 2023, 4:03 p.m. UTC
  Yeah, I still remember that "Who needs cpumap nowadays" (c), but anyway.

__xdp_build_skb_from_frame() missed the moment when the networking stack
became able to recycle skb pages backed by a Page Pool. This was making
e.g. cpumap redirect even less effective than simple %XDP_PASS. veth was
also affected in some scenarios.
A lot of drivers already use skb_mark_for_recycle(); it's been almost
two years and there seem to be no issues with using it in the generic
code too. {__,}xdp_release_frame() can then be removed as it has lost
its last user.
Page Pool then becomes zero-alloc (or almost) in the abovementioned
cases, too. Other memory type models (who needs them at this point)
are unaffected.
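
The gist of the change is a one-liner in __xdp_build_skb_from_frame()
(a simplified sketch, not the verbatim diff):

    /* net/core/xdp.c, __xdp_build_skb_from_frame(): mark the skb so
     * that the stack returns its pages to the page_pool instead of
     * freeing them
     */
    if (xdpf->mem.type == MEM_TYPE_PAGE_POOL)
        skb_mark_for_recycle(skb);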

Some numbers on 1 Xeon Platinum core bombed with 27 Mpps of 64-byte
IPv6 UDP:

Plain %XDP_PASS on baseline, Page Pool driver:

src cpu Rx     drops  dst cpu Rx
  2.1 Mpps       N/A    2.1 Mpps

cpumap redirect (w/o leaving its node) on baseline:

  6.8 Mpps  5.0 Mpps    1.8 Mpps

cpumap redirect with skb PP recycling:

  7.9 Mpps  5.7 Mpps    2.2 Mpps   +22%

Alexander Lobakin (2):
  xdp: recycle Page Pool backed skbs built from XDP frames
  xdp: remove unused {__,}xdp_release_frame()

 include/net/xdp.h | 29 -----------------------------
 net/core/xdp.c    | 19 ++-----------------
 2 files changed, 2 insertions(+), 46 deletions(-)
  

Comments

Jesper Dangaard Brouer March 3, 2023, 10:39 a.m. UTC | #1
On 01/03/2023 17.03, Alexander Lobakin wrote:
> Yeah, I still remember that "Who needs cpumap nowadays" (c), but anyway.
> 
> __xdp_build_skb_from_frame() missed the moment when the networking stack
> became able to recycle skb pages backed by a Page Pool. This was making
                                                ^^^^^^^^^
When talking about page_pool, can we write "page_pool" instead of
capitalized "Page Pool", please. I looked through the git log, and here
we all used "page_pool".

> e.g. cpumap redirect even less effective than simple %XDP_PASS. veth was
> also affected in some scenarios.

Thanks for working on closing this gap :-)

> A lot of drivers already use skb_mark_for_recycle(); it's been almost
> two years and there seem to be no issues with using it in the generic
> code too. {__,}xdp_release_frame() can then be removed as it has lost
> its last user.
> Page Pool then becomes zero-alloc (or almost) in the abovementioned
> cases, too. Other memory type models (who needs them at this point)
> are unaffected.
> 
> Some numbers on 1 Xeon Platinum core bombed with 27 Mpps of 64-byte
> IPv6 UDP:

What NIC driver?

> 
> Plain %XDP_PASS on baseline, Page Pool driver:
> 
> src cpu Rx     drops  dst cpu Rx
>    2.1 Mpps       N/A    2.1 Mpps
> 
> cpumap redirect (w/o leaving its node) on baseline:
> 
>    6.8 Mpps  5.0 Mpps    1.8 Mpps
> 
> cpumap redirect with skb PP recycling:
> 
>    7.9 Mpps  5.7 Mpps    2.2 Mpps   +22%
> 

It is of course awesome that cpumap SKBs are faster than the normal SKB path.
I do wonder where the +22% number comes from?

> Alexander Lobakin (2):
>    xdp: recycle Page Pool backed skbs built from XDP frames
>    xdp: remove unused {__,}xdp_release_frame()
> 
>   include/net/xdp.h | 29 -----------------------------
>   net/core/xdp.c    | 19 ++-----------------
>   2 files changed, 2 insertions(+), 46 deletions(-)
>
  
Alexander Lobakin March 3, 2023, 11:31 a.m. UTC | #2
From: Jesper Dangaard Brouer <jbrouer@redhat.com>
Date: Fri, 3 Mar 2023 11:39:06 +0100

> 
> On 01/03/2023 17.03, Alexander Lobakin wrote:
>> Yeah, I still remember that "Who needs cpumap nowadays" (c), but anyway.
>>
>> __xdp_build_skb_from_frame() missed the moment when the networking stack
>> became able to recycle skb pages backed by a Page Pool. This was making
>                                                ^^^^^^^^^
> When talking about page_pool, can we write "page_pool" instead of
> capitalized "Page Pool", please. I looked through the git log, and here
> we all used "page_pool".

Ah okay, no prob :D Yeah, that's probably more correct. "Page Pool" is
the name of the API, while page_pool is an entity we create via
page_pool_create().
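
For illustration, a minimal sketch of creating one (the parameters are
arbitrary, not taken from any particular driver):

    #include <net/page_pool.h>

    struct page_pool_params pp_params = {
        .flags     = PP_FLAG_DMA_MAP,   /* pool handles DMA mapping */
        .order     = 0,                 /* 0-order pages */
        .pool_size = 256,
        .nid       = NUMA_NO_NODE,
        .dev       = dev,               /* the driver's struct device */
        .dma_dir   = DMA_FROM_DEVICE,   /* Rx only */
    };
    struct page_pool *pool = page_pool_create(&pp_params);

    if (IS_ERR(pool))
        return PTR_ERR(pool);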

> 
>> e.g. cpumap redirect even less effective than simple %XDP_PASS. veth was
>> also affected in some scenarios.
> 
> Thanks for working on closing this gap :-)
> 
>> A lot of drivers already use skb_mark_for_recycle(); it's been almost
>> two years and there seem to be no issues with using it in the generic
>> code too. {__,}xdp_release_frame() can then be removed as it has lost
>> its last user.
>> Page Pool then becomes zero-alloc (or almost) in the abovementioned
>> cases, too. Other memory type models (who needs them at this point)
>> are unaffected.
>>
>> Some numbers on 1 Xeon Platinum core bombed with 27 Mpps of 64-byte
>> IPv6 UDP:
> 
> What NIC driver?

IAVF with XDP; the series adding XDP support will be sent in a couple
of weeks. WIP can be found on my GitHub[0].

> 
>>
>> Plain %XDP_PASS on baseline, Page Pool driver:
>>
>> src cpu Rx     drops  dst cpu Rx
>>    2.1 Mpps       N/A    2.1 Mpps
>>
>> cpumap redirect (w/o leaving its node) on baseline:
>>
>>    6.8 Mpps  5.0 Mpps    1.8 Mpps
>>
>> cpumap redirect with skb PP recycling:
>>
>>    7.9 Mpps  5.7 Mpps    2.2 Mpps   +22%
>>
> 
> It is of course awesome that cpumap SKBs are faster than the normal SKB path.

That's the point of cpumap redirect, right? You separate NAPI poll / IRQ
handling from the skb's traversal of the networking stack by moving the
latter to a different CPU, including page freeing (or recycling). That
takes a lot of load off the source CPU. 0.1 Mpps is not the highest
difference I've seen; cpumap redirect can boost up to 0.5 Mpps IIRC.
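
For reference, a minimal sketch of a cpumap-redirect XDP program (map
sizing and the destination CPU are arbitrary):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
        __uint(type, BPF_MAP_TYPE_CPUMAP);
        __uint(max_entries, 64);
        __type(key, __u32);
        __type(value, struct bpf_cpumap_val);
    } cpu_map SEC(".maps");

    SEC("xdp")
    int redirect_cpu(struct xdp_md *ctx)
    {
        __u32 dst_cpu = 1;    /* arbitrary destination CPU */

        /* Enqueue the frame to dst_cpu; the skb is built there,
         * off the Rx CPU. Fall back to XDP_PASS on failure.
         */
        return bpf_redirect_map(&cpu_map, dst_cpu, XDP_PASS);
    }

    char _license[] SEC("license") = "GPL";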

> I do wonder where the +22% number comes from?

(2.2 - 1.8) / 1.8 * 100% ≈ 22%. I'm comparing cpumap redirect
before/after the change here :)

> 
>> Alexander Lobakin (2):
>>    xdp: recycle Page Pool backed skbs built from XDP frames
>>    xdp: remove unused {__,}xdp_release_frame()
>>
>>   include/net/xdp.h | 29 -----------------------------
>>   net/core/xdp.c    | 19 ++-----------------
>>   2 files changed, 2 insertions(+), 46 deletions(-)
>>
> 

There's a build failure on non-PP systems due to skb_mark_for_recycle()
being declared only when CONFIG_PAGE_POOL is set. I'll spin v2 in a bit.
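
One way to fix it is to keep skb_mark_for_recycle() always defined and
only compile out its body, e.g. in include/linux/skbuff.h (a sketch;
the actual v2 fix may differ):

    static inline void skb_mark_for_recycle(struct sk_buff *skb)
    {
    #ifdef CONFIG_PAGE_POOL
        skb->pp_recycle = 1;
    #endif
    }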

[0] https://github.com/alobakin/linux/commits/iavf-xdp

Thanks,
Olek