[net-next,v4,5/6] page_pool: add a lockdep check for recycling in hardirq

Message ID 20230804180529.2483231-6-aleksander.lobakin@intel.com
State New
Series page_pool: a couple of assorted optimizations

Commit Message

Alexander Lobakin Aug. 4, 2023, 6:05 p.m. UTC
  From: Jakub Kicinski <kuba@kernel.org>

Page pool use in hardirq is prohibited, add debug checks
to catch misuses. IIRC we previously discussed using
DEBUG_NET_WARN_ON_ONCE() for this, but there were concerns
that people will have DEBUG_NET enabled in perf testing.
I don't think anyone enables lockdep in perf testing,
so use lockdep to avoid pushback and arguing :)

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Acked-by: Jesper Dangaard Brouer <hawk@kernel.org>
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
---
 include/linux/lockdep.h | 7 +++++++
 net/core/page_pool.c    | 2 ++
 2 files changed, 9 insertions(+)
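
For context, the misuse this assertion catches looks roughly like the
sketch below: a driver's Tx completion handler is invoked from netpoll,
i.e. in hardirq context, and tries to recycle a page back into the pool.
page_pool's recycling paths assume softirq/BH context: the ptr_ring
producer lock is not hardirq-safe, and the lockless per-CPU cache must
not be touched from hardirq. All mydrv_* names below are hypothetical.

#include <net/page_pool.h>

/* Hypothetical driver structures, for illustration only. */
struct mydrv_tx_buf {
	struct page *page;
};

struct mydrv_ring {
	struct page_pool *pp;
	struct mydrv_tx_buf *tx_buf;
	u16 next_to_clean;
};

static void mydrv_clean_tx_ring(struct mydrv_ring *ring)
{
	struct mydrv_tx_buf *tx_buf = &ring->tx_buf[ring->next_to_clean];

	/* Broken when reached via netpoll (hardirq): recycling may take
	 * the pool's ptr_ring producer spinlock, which is taken elsewhere
	 * without disabling IRQs. With this patch,
	 * lockdep_assert_no_hardirq() flags the call under lockdep.
	 */
	page_pool_put_full_page(ring->pp, tx_buf->page, false);
}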
  

Comments

Alexander Duyck Aug. 7, 2023, 2:48 p.m. UTC | #1
On Fri, 2023-08-04 at 20:05 +0200, Alexander Lobakin wrote:
> From: Jakub Kicinski <kuba@kernel.org>
> 
> Page pool use in hardirq is prohibited, add debug checks
> to catch misuses. IIRC we previously discussed using
> DEBUG_NET_WARN_ON_ONCE() for this, but there were concerns
> that people will have DEBUG_NET enabled in perf testing.
> I don't think anyone enables lockdep in perf testing,
> so use lockdep to avoid pushback and arguing :)
> 
> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
> Acked-by: Jesper Dangaard Brouer <hawk@kernel.org>
> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
> ---
>  include/linux/lockdep.h | 7 +++++++
>  net/core/page_pool.c    | 2 ++
>  2 files changed, 9 insertions(+)
> 
> diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
> index 310f85903c91..dc2844b071c2 100644
> --- a/include/linux/lockdep.h
> +++ b/include/linux/lockdep.h
> @@ -625,6 +625,12 @@ do {									\
>  	WARN_ON_ONCE(__lockdep_enabled && !this_cpu_read(hardirq_context)); \
>  } while (0)
>  
> +#define lockdep_assert_no_hardirq()					\
> +do {									\
> +	WARN_ON_ONCE(__lockdep_enabled && (this_cpu_read(hardirq_context) || \
> +					   !this_cpu_read(hardirqs_enabled))); \
> +} while (0)
> +
>  #define lockdep_assert_preemption_enabled()				\
>  do {									\
>  	WARN_ON_ONCE(IS_ENABLED(CONFIG_PREEMPT_COUNT)	&&		\
> @@ -659,6 +665,7 @@ do {									\
>  # define lockdep_assert_irqs_enabled() do { } while (0)
>  # define lockdep_assert_irqs_disabled() do { } while (0)
>  # define lockdep_assert_in_irq() do { } while (0)
> +# define lockdep_assert_no_hardirq() do { } while (0)
>  
>  # define lockdep_assert_preemption_enabled() do { } while (0)
>  # define lockdep_assert_preemption_disabled() do { } while (0)
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 03ad74d25959..77cb75e63aca 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -587,6 +587,8 @@ static __always_inline struct page *
>  __page_pool_put_page(struct page_pool *pool, struct page *page,
>  		     unsigned int dma_sync_size, bool allow_direct)
>  {
> +	lockdep_assert_no_hardirq();
> +
>  	/* This allocator is optimized for the XDP mode that uses
>  	 * one-frame-per-page, but have fallbacks that act like the
>  	 * regular page allocator APIs.

So two points.

First, could we look at moving this inside the if statement, just
before we return the page, as there isn't a risk of needing a lock
until we get into that path.

Secondly, rather than returning an error, is there any reason why we
couldn't just avoid returning the page and instead drop into the
release path, which wouldn't take the locks in the first place? Either
that, or I would even be good with some combination of the two, where
we threw a warning but still just dropped the page, so we further
reduce our risk of actually locking things up.
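
A sketch of what that combination could look like (illustrative only,
not what the patch does): assert only on the recycling branch, and on a
violation fall back to the release path, which takes no locks. The
helper name is made up.

#include <linux/bug.h>
#include <linux/preempt.h>

/* Hypothetical helper: __page_pool_put_page() would call this inside
 * its page_ref_count(page) == 1 branch, right before touching the
 * lockless cache or the ptr_ring.
 */
static inline bool page_pool_may_recycle(void)
{
	if (likely(!in_hardirq()))
		return true;

	WARN_ON_ONCE(1);	/* misuse: recycling from hardirq */
	return false;		/* caller releases the page instead */
}

On false, __page_pool_put_page() would hand the page to
page_pool_return_page() (unmap plus put_page(), no locks) instead of
returning it to the caller, so a broken caller still makes forward
progress.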
  
Alexander Lobakin Aug. 8, 2023, 1:16 p.m. UTC | #2
From: Alexander H Duyck <alexander.duyck@gmail.com>
Date: Mon, 07 Aug 2023 07:48:54 -0700

> On Fri, 2023-08-04 at 20:05 +0200, Alexander Lobakin wrote:
>> From: Jakub Kicinski <kuba@kernel.org>
>>
>> Page pool use in hardirq is prohibited, add debug checks
>> to catch misuses. IIRC we previously discussed using
>> DEBUG_NET_WARN_ON_ONCE() for this, but there were concerns
>> that people will have DEBUG_NET enabled in perf testing.
>> I don't think anyone enables lockdep in perf testing,
>> so use lockdep to avoid pushback and arguing :)
>>
>> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
>> Acked-by: Jesper Dangaard Brouer <hawk@kernel.org>
>> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
>> ---
>>  include/linux/lockdep.h | 7 +++++++
>>  net/core/page_pool.c    | 2 ++
>>  2 files changed, 9 insertions(+)
>>
>> diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
>> index 310f85903c91..dc2844b071c2 100644
>> --- a/include/linux/lockdep.h
>> +++ b/include/linux/lockdep.h
>> @@ -625,6 +625,12 @@ do {									\
>>  	WARN_ON_ONCE(__lockdep_enabled && !this_cpu_read(hardirq_context)); \
>>  } while (0)
>>  
>> +#define lockdep_assert_no_hardirq()					\
>> +do {									\
>> +	WARN_ON_ONCE(__lockdep_enabled && (this_cpu_read(hardirq_context) || \
>> +					   !this_cpu_read(hardirqs_enabled))); \
>> +} while (0)
>> +
>>  #define lockdep_assert_preemption_enabled()				\
>>  do {									\
>>  	WARN_ON_ONCE(IS_ENABLED(CONFIG_PREEMPT_COUNT)	&&		\
>> @@ -659,6 +665,7 @@ do {									\
>>  # define lockdep_assert_irqs_enabled() do { } while (0)
>>  # define lockdep_assert_irqs_disabled() do { } while (0)
>>  # define lockdep_assert_in_irq() do { } while (0)
>> +# define lockdep_assert_no_hardirq() do { } while (0)
>>  
>>  # define lockdep_assert_preemption_enabled() do { } while (0)
>>  # define lockdep_assert_preemption_disabled() do { } while (0)
>> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
>> index 03ad74d25959..77cb75e63aca 100644
>> --- a/net/core/page_pool.c
>> +++ b/net/core/page_pool.c
>> @@ -587,6 +587,8 @@ static __always_inline struct page *
>>  __page_pool_put_page(struct page_pool *pool, struct page *page,
>>  		     unsigned int dma_sync_size, bool allow_direct)
>>  {
>> +	lockdep_assert_no_hardirq();
>> +
>>  	/* This allocator is optimized for the XDP mode that uses
>>  	 * one-frame-per-page, but have fallbacks that act like the
>>  	 * regular page allocator APIs.
> 
> So two points.
> 
> First, could we look at moving this inside the if statement, just
> before we return the page, as there isn't a risk of needing a lock
> until we get into that path.
> 
> Secondly, rather than returning an error, is there any reason why we
> couldn't just avoid returning the page and instead drop into the
> release path, which wouldn't take the locks in the first place? Either

That is an exception path to quickly catch broken drivers and fix
them; why bother? It's not something we have to live with.

> that, or I would even be good with some combination of the two, where
> we threw a warning but still just dropped the page, so we further
> reduce our risk of actually locking things up.

Thanks,
Olek
  
Alexander Duyck Aug. 8, 2023, 1:45 p.m. UTC | #3
On Tue, Aug 8, 2023 at 6:16 AM Alexander Lobakin
<aleksander.lobakin@intel.com> wrote:
>
> From: Alexander H Duyck <alexander.duyck@gmail.com>
> Date: Mon, 07 Aug 2023 07:48:54 -0700
>
> > On Fri, 2023-08-04 at 20:05 +0200, Alexander Lobakin wrote:
> >> From: Jakub Kicinski <kuba@kernel.org>
> >>
> >> Page pool use in hardirq is prohibited, add debug checks
> >> to catch misuses. IIRC we previously discussed using
> >> DEBUG_NET_WARN_ON_ONCE() for this, but there were concerns
> >> that people will have DEBUG_NET enabled in perf testing.
> >> I don't think anyone enables lockdep in perf testing,
> >> so use lockdep to avoid pushback and arguing :)
> >>
> >> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
> >> Acked-by: Jesper Dangaard Brouer <hawk@kernel.org>
> >> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
> >> ---
> >>  include/linux/lockdep.h | 7 +++++++
> >>  net/core/page_pool.c    | 2 ++
> >>  2 files changed, 9 insertions(+)
> >>
> >> diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
> >> index 310f85903c91..dc2844b071c2 100644
> >> --- a/include/linux/lockdep.h
> >> +++ b/include/linux/lockdep.h
> >> @@ -625,6 +625,12 @@ do {                                                                    \
> >>      WARN_ON_ONCE(__lockdep_enabled && !this_cpu_read(hardirq_context)); \
> >>  } while (0)
> >>
> >> +#define lockdep_assert_no_hardirq()                                 \
> >> +do {                                                                        \
> >> +    WARN_ON_ONCE(__lockdep_enabled && (this_cpu_read(hardirq_context) || \
> >> +                                       !this_cpu_read(hardirqs_enabled))); \
> >> +} while (0)
> >> +
> >>  #define lockdep_assert_preemption_enabled()                         \
> >>  do {                                                                        \
> >>      WARN_ON_ONCE(IS_ENABLED(CONFIG_PREEMPT_COUNT)   &&              \
> >> @@ -659,6 +665,7 @@ do {                                                                     \
> >>  # define lockdep_assert_irqs_enabled() do { } while (0)
> >>  # define lockdep_assert_irqs_disabled() do { } while (0)
> >>  # define lockdep_assert_in_irq() do { } while (0)
> >> +# define lockdep_assert_no_hardirq() do { } while (0)
> >>
> >>  # define lockdep_assert_preemption_enabled() do { } while (0)
> >>  # define lockdep_assert_preemption_disabled() do { } while (0)
> >> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> >> index 03ad74d25959..77cb75e63aca 100644
> >> --- a/net/core/page_pool.c
> >> +++ b/net/core/page_pool.c
> >> @@ -587,6 +587,8 @@ static __always_inline struct page *
> >>  __page_pool_put_page(struct page_pool *pool, struct page *page,
> >>                   unsigned int dma_sync_size, bool allow_direct)
> >>  {
> >> +    lockdep_assert_no_hardirq();
> >> +
> >>      /* This allocator is optimized for the XDP mode that uses
> >>       * one-frame-per-page, but have fallbacks that act like the
> >>       * regular page allocator APIs.
> >
> > So two points.
> >
> > First, could we look at moving this inside the if statement, just
> > before we return the page, as there isn't a risk of needing a lock
> > until we get into that path.
> >
> > Secondly, rather than returning an error, is there any reason why we
> > couldn't just avoid returning the page and instead drop into the
> > release path, which wouldn't take the locks in the first place? Either
>
> That is an exception path to quickly catch broken drivers and fix
> them; why bother? It's not something we have to live with.

My concern is that the current "fix" consists of stalling a Tx ring.
We need to have a way to allow forward progress when somebody mixes
xdp_frame and skb traffic as I suspect we will end up with a number of
devices doing this since they cannot handle recycling the pages in
hardirq context.

The only reason why the skbs don't have the problem is that they are
queued and then cleaned up in the net_tx_action. That is why I wonder
if we shouldn't look at adding some sort of support for doing
something like that with xdp_frame as well. Something like a
dev_kfree_pp_page_any to go along with the dev_kfree_skb_any.
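
For reference, the skb mechanism invoked as precedent boils down to the
dispatch below (a paraphrased sketch of dev_kfree_skb_any(), not the
verbatim source). In hardirq, dev_kfree_skb_irq() chains the skb onto a
per-CPU completion list through the skb's own list pointers and raises
NET_TX_SOFTIRQ; net_tx_action() then frees it from softirq context.
That chaining is precisely what xdp_frame lacks.

#include <linux/netdevice.h>

static inline void kfree_skb_any_sketch(struct sk_buff *skb)
{
	if (in_hardirq() || irqs_disabled())
		dev_kfree_skb_irq(skb);	/* defer to net_tx_action */
	else
		dev_kfree_skb(skb);	/* safe to free right away */
}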
  
Alexander Lobakin Aug. 8, 2023, 1:58 p.m. UTC | #4
From: Alexander Duyck <alexander.duyck@gmail.com>
Date: Tue, 8 Aug 2023 06:45:26 -0700

> On Tue, Aug 8, 2023 at 6:16 AM Alexander Lobakin
> <aleksander.lobakin@intel.com> wrote:
>>
>> From: Alexander H Duyck <alexander.duyck@gmail.com>
>> Date: Mon, 07 Aug 2023 07:48:54 -0700
>>
>>> On Fri, 2023-08-04 at 20:05 +0200, Alexander Lobakin wrote:
>>>> From: Jakub Kicinski <kuba@kernel.org>
>>>>
>>>> Page pool use in hardirq is prohibited, add debug checks
>>>> to catch misuses. IIRC we previously discussed using
>>>> DEBUG_NET_WARN_ON_ONCE() for this, but there were concerns
>>>> that people will have DEBUG_NET enabled in perf testing.
>>>> I don't think anyone enables lockdep in perf testing,
>>>> so use lockdep to avoid pushback and arguing :)
>>>>
>>>> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
>>>> Acked-by: Jesper Dangaard Brouer <hawk@kernel.org>
>>>> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
>>>> ---
>>>>  include/linux/lockdep.h | 7 +++++++
>>>>  net/core/page_pool.c    | 2 ++
>>>>  2 files changed, 9 insertions(+)
>>>>
>>>> diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
>>>> index 310f85903c91..dc2844b071c2 100644
>>>> --- a/include/linux/lockdep.h
>>>> +++ b/include/linux/lockdep.h
>>>> @@ -625,6 +625,12 @@ do {                                                                    \
>>>>      WARN_ON_ONCE(__lockdep_enabled && !this_cpu_read(hardirq_context)); \
>>>>  } while (0)
>>>>
>>>> +#define lockdep_assert_no_hardirq()                                 \
>>>> +do {                                                                        \
>>>> +    WARN_ON_ONCE(__lockdep_enabled && (this_cpu_read(hardirq_context) || \
>>>> +                                       !this_cpu_read(hardirqs_enabled))); \
>>>> +} while (0)
>>>> +
>>>>  #define lockdep_assert_preemption_enabled()                         \
>>>>  do {                                                                        \
>>>>      WARN_ON_ONCE(IS_ENABLED(CONFIG_PREEMPT_COUNT)   &&              \
>>>> @@ -659,6 +665,7 @@ do {                                                                     \
>>>>  # define lockdep_assert_irqs_enabled() do { } while (0)
>>>>  # define lockdep_assert_irqs_disabled() do { } while (0)
>>>>  # define lockdep_assert_in_irq() do { } while (0)
>>>> +# define lockdep_assert_no_hardirq() do { } while (0)
>>>>
>>>>  # define lockdep_assert_preemption_enabled() do { } while (0)
>>>>  # define lockdep_assert_preemption_disabled() do { } while (0)
>>>> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
>>>> index 03ad74d25959..77cb75e63aca 100644
>>>> --- a/net/core/page_pool.c
>>>> +++ b/net/core/page_pool.c
>>>> @@ -587,6 +587,8 @@ static __always_inline struct page *
>>>>  __page_pool_put_page(struct page_pool *pool, struct page *page,
>>>>                   unsigned int dma_sync_size, bool allow_direct)
>>>>  {
>>>> +    lockdep_assert_no_hardirq();
>>>> +
>>>>      /* This allocator is optimized for the XDP mode that uses
>>>>       * one-frame-per-page, but have fallbacks that act like the
>>>>       * regular page allocator APIs.
>>>
>>> So two points.
>>>
>>> First, could we look at moving this inside the if statement, just
>>> before we return the page, as there isn't a risk of needing a lock
>>> until we get into that path.
>>>
>>> Secondly, rather than returning an error, is there any reason why we
>>> couldn't just avoid returning the page and instead drop into the
>>> release path, which wouldn't take the locks in the first place? Either
>>
>> That is an exception path to quickly catch broken drivers and fix
>> them; why bother? It's not something we have to live with.
> 
> My concern is that the current "fix" consists of stalling a Tx ring.
> We need to have a way to allow forward progress when somebody mixes
> xdp_frame and skb traffic as I suspect we will end up with a number of
> devices doing this since they cannot handle recycling the pages in
> hardirq context.

You may have noticed that several vendors have already disabled
recycling of XDP buffers when in hardirq (= netpoll) in their drivers.
hardirq is in general not meant for networking-related operations.
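
The pattern those drivers use looks roughly like this (mydrv_* names
and fields are hypothetical; assume a Tx buffer that records whether it
holds an skb or an xdp_frame): under netpoll, the skb side can use the
hardirq-safe dev_kfree_skb_any(), while the XDP side simply stops
cleaning and leaves the frame for the next real NAPI poll, which is
exactly the "stalling a Tx ring" behavior questioned above.

#include <linux/netdevice.h>
#include <net/xdp.h>

static void mydrv_clean_tx_irq(struct mydrv_ring *ring)
{
	while (ring->next_to_clean != ring->next_to_use) {
		struct mydrv_tx_buf *buf = &ring->tx_buf[ring->next_to_clean];

		if (buf->type == MYDRV_TX_XDP) {
			if (unlikely(in_hardirq()))
				break;	/* no recycling here; NAPI will do it */
			xdp_return_frame(buf->xdpf);
		} else {
			dev_kfree_skb_any(buf->skb);	/* hardirq-safe */
		}
		ring->next_to_clean++;
	}
}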

> 
> The only reason why the skbs don't have the problem is that they are
> queued and then cleaned up in the net_tx_action. That is why I wonder
> if we shouldn't look at adding some sort of support for doing
> something like that with xdp_frame as well. Something like a
> dev_kfree_pp_page_any to go along with the dev_kfree_skb_any.

I still don't get why we may need to clean XDP buffers in hardirq.
Maybe someone could give me some links to read on why we may need this
and how that happens? netpoll is a very specific thing for some debug
operations, isn't it? XDP shouldn't in general be enabled when this
happens, should it?

(unrelated: 6:58 AM West Coast, do you usually wake up early or are you
 traveling? :D)

Thanks,
Olek
  
Alexander Duyck Aug. 8, 2023, 2:52 p.m. UTC | #5
On Tue, Aug 8, 2023 at 6:59 AM Alexander Lobakin
<aleksander.lobakin@intel.com> wrote:
>
> From: Alexander Duyck <alexander.duyck@gmail.com>
> Date: Tue, 8 Aug 2023 06:45:26 -0700
>
> > On Tue, Aug 8, 2023 at 6:16 AM Alexander Lobakin
> > <aleksander.lobakin@intel.com> wrote:
> >>
> >> From: Alexander H Duyck <alexander.duyck@gmail.com>
> >> Date: Mon, 07 Aug 2023 07:48:54 -0700
> >>
> >>> On Fri, 2023-08-04 at 20:05 +0200, Alexander Lobakin wrote:
> >>>> From: Jakub Kicinski <kuba@kernel.org>
> >>>>
> >>>> Page pool use in hardirq is prohibited, add debug checks
> >>>> to catch misuses. IIRC we previously discussed using
> >>>> DEBUG_NET_WARN_ON_ONCE() for this, but there were concerns
> >>>> that people will have DEBUG_NET enabled in perf testing.
> >>>> I don't think anyone enables lockdep in perf testing,
> >>>> so use lockdep to avoid pushback and arguing :)
> >>>>
> >>>> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
> >>>> Acked-by: Jesper Dangaard Brouer <hawk@kernel.org>
> >>>> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
> >>>> ---
> >>>>  include/linux/lockdep.h | 7 +++++++
> >>>>  net/core/page_pool.c    | 2 ++
> >>>>  2 files changed, 9 insertions(+)
> >>>>
> >>>> diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
> >>>> index 310f85903c91..dc2844b071c2 100644
> >>>> --- a/include/linux/lockdep.h
> >>>> +++ b/include/linux/lockdep.h
> >>>> @@ -625,6 +625,12 @@ do {                                                                    \
> >>>>      WARN_ON_ONCE(__lockdep_enabled && !this_cpu_read(hardirq_context)); \
> >>>>  } while (0)
> >>>>
> >>>> +#define lockdep_assert_no_hardirq()                                 \
> >>>> +do {                                                                        \
> >>>> +    WARN_ON_ONCE(__lockdep_enabled && (this_cpu_read(hardirq_context) || \
> >>>> +                                       !this_cpu_read(hardirqs_enabled))); \
> >>>> +} while (0)
> >>>> +
> >>>>  #define lockdep_assert_preemption_enabled()                         \
> >>>>  do {                                                                        \
> >>>>      WARN_ON_ONCE(IS_ENABLED(CONFIG_PREEMPT_COUNT)   &&              \
> >>>> @@ -659,6 +665,7 @@ do {                                                                     \
> >>>>  # define lockdep_assert_irqs_enabled() do { } while (0)
> >>>>  # define lockdep_assert_irqs_disabled() do { } while (0)
> >>>>  # define lockdep_assert_in_irq() do { } while (0)
> >>>> +# define lockdep_assert_no_hardirq() do { } while (0)
> >>>>
> >>>>  # define lockdep_assert_preemption_enabled() do { } while (0)
> >>>>  # define lockdep_assert_preemption_disabled() do { } while (0)
> >>>> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> >>>> index 03ad74d25959..77cb75e63aca 100644
> >>>> --- a/net/core/page_pool.c
> >>>> +++ b/net/core/page_pool.c
> >>>> @@ -587,6 +587,8 @@ static __always_inline struct page *
> >>>>  __page_pool_put_page(struct page_pool *pool, struct page *page,
> >>>>                   unsigned int dma_sync_size, bool allow_direct)
> >>>>  {
> >>>> +    lockdep_assert_no_hardirq();
> >>>> +
> >>>>      /* This allocator is optimized for the XDP mode that uses
> >>>>       * one-frame-per-page, but have fallbacks that act like the
> >>>>       * regular page allocator APIs.
> >>>
> >>> So two points.
> >>>
> >>> First, could we look at moving this inside the if statement, just
> >>> before we return the page, as there isn't a risk of needing a lock
> >>> until we get into that path.
> >>>
> >>> Secondly, rather than returning an error, is there any reason why we
> >>> couldn't just avoid returning the page and instead drop into the
> >>> release path, which wouldn't take the locks in the first place? Either
> >>
> >> That is an exception path to quickly catch broken drivers and fix
> >> them; why bother? It's not something we have to live with.
> >
> > My concern is that the current "fix" consists of stalling a Tx ring.
> > We need to have a way to allow forward progress when somebody mixes
> > xdp_frame and skb traffic as I suspect we will end up with a number of
> > devices doing this since they cannot handle recycling the pages in
> > hardirq context.
>
> You may have noticed that several vendors have already disabled
> recycling of XDP buffers when in hardirq (= netpoll) in their drivers.
> hardirq is in general not meant for networking-related operations.

The whole idea behind the netpoll cleanup is to get the Tx buffers out
of the way so that we can transmit even after the system has crashed.
The idea isn't to transmit XDP buffers, but to get the buffers out of
the way in the cases where somebody is combining both xdp_frame and
sk_buff on the same queue due to a limited number of rings being
present on the device.

My concern is that at some point in the near future somebody is going
to have a system crash and instead of being able to get the crash log
message out via their netconsole it is going to get cut off because
the driver stopped cleaning the Tx ring because somebody was also
using it as an XDP redirect destination.

> >
> > The only reason why the skbs don't have the problem is that they are
> > queued and then cleaned up in the net_tx_action. That is why I wonder
> > if we shouldn't look at adding some sort of support for doing
> > something like that with xdp_frame as well. Something like a
> > dev_kfree_pp_page_any to go along with the dev_kfree_skb_any.
>
> I still don't get why we may need to clean XDP buffers in hardirq.
> Maybe someone could give me some links to read on why we may need this
> and how that happens? netpoll is a very specific thing for some debug
> operations, isn't it? XDP shouldn't in general be enabled when this
> happens, should it?

I think I kind of explained it above. It isn't so much about cleaning
the XDP buffers as getting them off of the ring and out of the way. If
we block a Tx queue because of an XDP buffer then we cannot use that
Tx queue. I would be good with us just deferring the cleanup like we
do with an sk_buff in dev_kfree_skb_irq; the only issue is we don't
have the ability to put them on a queue since they don't have
prev/next pointers.

I suppose an alternative to cleaning them might be to make a mandatory
requirement that you cannot support netpoll and mix xdp_frame and
sk_buff on the same queue. If we enforced that then my concern about
them blocking a queue would be addressed.
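
Purely as an illustration of that enforcement idea (all mydrv_* names
are hypothetical, and probing for an active netpoll instance via
dev->npinfo is an assumption about where such a check could live):

#include <linux/netdevice.h>

static bool mydrv_mixed_tx_queues(const struct net_device *dev);
static int mydrv_attach_prog(struct net_device *dev, struct bpf_prog *prog);

static int mydrv_xdp_setup(struct net_device *dev, struct bpf_prog *prog)
{
	/* Mixed xdp_frame + sk_buff Tx queues cannot be cleaned safely
	 * from netpoll's hardirq context, so refuse the combination.
	 */
	if (rcu_access_pointer(dev->npinfo) && mydrv_mixed_tx_queues(dev))
		return -EBUSY;

	return mydrv_attach_prog(dev, prog);
}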

> (unrelated: 6:58 AM West Coast, do you usually wake up early or are you
> traveling? :D)

I am usually up pretty early, especially this time of year. Sunrise
here is 6 AM and I am usually up a little before that... :)
  
Alexander Lobakin Aug. 8, 2023, 3:06 p.m. UTC | #6
From: Alexander Duyck <alexander.duyck@gmail.com>
Date: Tue, 8 Aug 2023 07:52:32 -0700

> On Tue, Aug 8, 2023 at 6:59 AM Alexander Lobakin
> <aleksander.lobakin@intel.com> wrote:
>>
>> From: Alexander Duyck <alexander.duyck@gmail.com>
>> Date: Tue, 8 Aug 2023 06:45:26 -0700

[...]

>>>>> Secondly, rather than returning an error, is there any reason why we
>>>>> couldn't just avoid returning the page and instead drop into the
>>>>> release path, which wouldn't take the locks in the first place? Either
>>>>
>>>> That is an exception path to quickly catch broken drivers and fix
>>>> them; why bother? It's not something we have to live with.
>>>
>>> My concern is that the current "fix" consists of stalling a Tx ring.
>>> We need to have a way to allow forward progress when somebody mixes
>>> xdp_frame and skb traffic as I suspect we will end up with a number of
>>> devices doing this since they cannot handle recycling the pages in
>>> hardirq context.
>>
>> You may have noticed that several vendors have already disabled
>> recycling of XDP buffers when in hardirq (= netpoll) in their drivers.
>> hardirq is in general not meant for networking-related operations.
> 
> The whole idea behind the netpoll cleanup is to get the Tx buffers out
> of the way so that we can transmit even after the system has crashed.
> The idea isn't to transmit XDP buffers, but to get the buffers out of
> the way in the cases where somebody is combining both xdp_frame and
> sk_buff on the same queue due to a limited number of rings being
> present on the device.

I see now, thanks a lot!

> 
> My concern is that at some point in the near future somebody is going
> to have a system crash and instead of being able to get the crash log
> message out via their netconsole it is going to get cut off because
> the driver stopped cleaning the Tx ring because somebody was also
> using it as an XDP redirect destination.
> 
>>>
>>> The only reason why the skbs don't have the problem is that they are
>>> queued and then cleaned up in the net_tx_action. That is why I wonder
>>> if we shouldn't look at adding some sort of support for doing
>>> something like that with xdp_frame as well. Something like a
>>> dev_kfree_pp_page_any to go along with the dev_kfree_skb_any.
>>
>> I still don't get why we may need to clean XDP buffers in hardirq.
>> Maybe someone could give me some links to read on why we may need this
>> and how that happens? netpoll is a very specific thing for some debug
>> operations, isn't it? XDP shouldn't in general be enabled when this
>> happens, should it?
> 
> I think I kind of explained it above. It isn't so much about cleaning
> the XDP buffers as getting them off of the ring and out of the way. If
> we block a Tx queue because of an XDP buffer then we cannot use that
> Tx queue. I would be good with us just deferring the cleanup like we
> do with an sk_buff in dev_kfree_skb_irq; the only issue is we don't
> have the ability to put them on a queue since they don't have
> prev/next pointers.
> 
> I suppose an alternative to cleaning them might be to make a mandatory
> requirement that you cannot support netpoll and mix xdp_frame and
> sk_buff on the same queue. If we enforced that then my concern about
> them blocking a queue would be addressed.

I'm leaning more towards this one TBH. I don't see netpoll alone as
a solid argument for introducing deferred-free queues for XDP frames :s

> 
>> (unrelated: 6:58 AM West Coast, do you usually wake up early or are you
>> traveling? :D)
> 
> I am usually up pretty early, especially this time of year. Sunrise
> here is 6 AM and I am usually up a little before that... :)

Nice!

Thanks,
Olek
  
Alexander Duyck Aug. 8, 2023, 5:35 p.m. UTC | #7
On Tue, Aug 8, 2023 at 8:06 AM Alexander Lobakin
<aleksander.lobakin@intel.com> wrote:
>
> From: Alexander Duyck <alexander.duyck@gmail.com>
> Date: Tue, 8 Aug 2023 07:52:32 -0700
>
> > On Tue, Aug 8, 2023 at 6:59 AM Alexander Lobakin
> > <aleksander.lobakin@intel.com> wrote:
> >>
> >> From: Alexander Duyck <alexander.duyck@gmail.com>
> >> Date: Tue, 8 Aug 2023 06:45:26 -0700
>
> [...]
>
> >>>>> Secondly, rather than returning an error, is there any reason why we
> >>>>> couldn't just avoid returning the page and instead drop into the
> >>>>> release path, which wouldn't take the locks in the first place? Either
> >>>>
> >>>> That is an exception path to quickly catch broken drivers and fix
> >>>> them; why bother? It's not something we have to live with.
> >>>
> >>> My concern is that the current "fix" consists of stalling a Tx ring.
> >>> We need to have a way to allow forward progress when somebody mixes
> >>> xdp_frame and skb traffic as I suspect we will end up with a number of
> >>> devices doing this since they cannot handle recycling the pages in
> >>> hardirq context.
> >>
> >> You may have noticed that several vendors have already disabled
> >> recycling of XDP buffers when in hardirq (= netpoll) in their drivers.
> >> hardirq is in general not meant for networking-related operations.
> >
> > The whole idea behind the netpoll cleanup is to get the Tx buffers out
> > of the way so that we can transmit even after the system has crashed.
> > The idea isn't to transmit XDP buffers, but to get the buffers out of
> > the way in the cases where somebody is combining both xdp_frame and
> > sk_buff on the same queue due to a limited number of rings being
> > present on the device.
>
> I see now, thanks a lot!
>
> >
> > My concern is that at some point in the near future somebody is going
> > to have a system crash and instead of being able to get the crash log
> > message out via their netconsole it is going to get cut off because
> > the driver stopped cleaning the Tx ring because somebody was also
> > using it as an XDP redirect destination.
> >
> >>>
> >>> The only reason why the skbs don't have the problem is that they are
> >>> queued and then cleaned up in the net_tx_action. That is why I wonder
> >>> if we shouldn't look at adding some sort of support for doing
> >>> something like that with xdp_frame as well. Something like a
> >>> dev_kfree_pp_page_any to go along with the dev_kfree_skb_any.
> >>
> >> I still don't get why we may need to clean XDP buffers in hardirq.
> >> Maybe someone could give me some links to read on why we may need this
> >> and how that happens? netpoll is a very specific thing for some debug
> >> operations, isn't it? XDP shouldn't in general be enabled when this
> >> happens, should it?
> >
> > I think I kind of explained it above. It isn't so much about cleaning
> > the XDP buffers as getting them off of the ring and out of the way. If
> > we block a Tx queue because of an XDP buffer then we cannot use that
> > Tx queue. I would be good with us just deferring the cleanup like we
> > do with an sk_buff in dev_kfree_skb_irq; the only issue is we don't
> > have the ability to put them on a queue since they don't have
> > prev/next pointers.
> >
> > I suppose an alternative to cleaning them might be to make a mandatory
> > requirement that you cannot support netpoll and mix xdp_frame and
> > sk_buff on the same queue. If we enforced that then my concern about
> > them blocking a queue would be addressed.
>
> I'm leaning more towards this one TBH. I don't see netpoll alone as
> a solid argument for introducing deferred-free queues for XDP frames :s

That was kind of my line of thought as well. That is why I was
thinking that instead of bothering with a queue, it might work just as
well to throw all recycling out the window and call put_page() when we
are dealing with XDP in netpoll, forcing the page into the free path.
Then it becomes more of an "_any" type handler.
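
In code, such an "_any"-style handler might look like the sketch below
(a hypothetical helper, not an existing API). page_pool_release_page()
was the then-available interface for detaching a page from its pool; it
does the DMA unmap and release accounting, after which put_page() is a
plain lock-free free:

#include <net/page_pool.h>

static inline void page_pool_put_page_any(struct page_pool *pool,
					  struct page *page)
{
	if (in_hardirq() || irqs_disabled()) {
		/* netpoll path: skip recycling, force the free path */
		page_pool_release_page(pool, page);
		put_page(page);
	} else {
		page_pool_put_full_page(pool, page, false);
	}
}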
  

Patch

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 310f85903c91..dc2844b071c2 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -625,6 +625,12 @@  do {									\
 	WARN_ON_ONCE(__lockdep_enabled && !this_cpu_read(hardirq_context)); \
 } while (0)
 
+#define lockdep_assert_no_hardirq()					\
+do {									\
+	WARN_ON_ONCE(__lockdep_enabled && (this_cpu_read(hardirq_context) || \
+					   !this_cpu_read(hardirqs_enabled))); \
+} while (0)
+
 #define lockdep_assert_preemption_enabled()				\
 do {									\
 	WARN_ON_ONCE(IS_ENABLED(CONFIG_PREEMPT_COUNT)	&&		\
@@ -659,6 +665,7 @@  do {									\
 # define lockdep_assert_irqs_enabled() do { } while (0)
 # define lockdep_assert_irqs_disabled() do { } while (0)
 # define lockdep_assert_in_irq() do { } while (0)
+# define lockdep_assert_no_hardirq() do { } while (0)
 
 # define lockdep_assert_preemption_enabled() do { } while (0)
 # define lockdep_assert_preemption_disabled() do { } while (0)
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 03ad74d25959..77cb75e63aca 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -587,6 +587,8 @@  static __always_inline struct page *
 __page_pool_put_page(struct page_pool *pool, struct page *page,
 		     unsigned int dma_sync_size, bool allow_direct)
 {
+	lockdep_assert_no_hardirq();
+
 	/* This allocator is optimized for the XDP mode that uses
 	 * one-frame-per-page, but have fallbacks that act like the
 	 * regular page allocator APIs.