[RFC,net-next,0/2] net: Use SMP threads for backlog NAPI.

Message ID 20230814093528.117342-1-bigeasy@linutronix.de

Message

Sebastian Andrzej Siewior Aug. 14, 2023, 9:35 a.m. UTC
The RPS code and the "deferred skb free" code both send an IPI/function
call to a remote CPU, in which a softirq is raised. This leads to a
warning on PREEMPT_RT because raising softirqs from a function call led
to undesired behaviour in the past. I had duct tape in RT for the
"deferred skb free" case, and Wander Lairson Costa reported the RPS case.
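
For reference, this is roughly what the RPS kick looks like before the
series (condensed from net/core/dev.c; simplified, not the exact
upstream code):

static void rps_trigger_softirq(void *data)
{
	struct softnet_data *sd = data;

	/* Runs in IPI (function call) context on the remote CPU:
	 * scheduling the backlog NAPI raises NET_RX_SOFTIRQ from this
	 * context -- the raise that trips the PREEMPT_RT warning. */
	____napi_schedule(sd, &sd->backlog);
	sd->received_rps++;
}

static void net_rps_send_ipi(struct softnet_data *remsd)
{
	/* Kick every remote CPU that has packets queued for it. */
	while (remsd) {
		struct softnet_data *next = remsd->rps_ipi_next;

		if (cpu_online(remsd->cpu))
			smp_call_function_single_async(remsd->cpu,
						       &remsd->csd);
		remsd = next;
	}
}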

Patch #1 creates per-CPU threads for the backlog NAPI. It follows the
         threaded NAPI model, solves the issue and simplifies the code
         (a sketch of the direction follows below).
Patch #2 gets rid of the warning. Since the ksoftirqd changes, the
         situation isn't as bad as it was. Still, it would be better to
         keep the softirq in the context where it originated.
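
A minimal sketch of the direction patch #1 takes (names here are
illustrative, not necessarily those used in the patch): each CPU's
backlog NAPI gets a dedicated kthread, as with threaded NAPI, so a
remote CPU wakes a thread instead of raising a softirq from an IPI:

static int __init backlog_napi_init(void)
{
	unsigned int cpu;

	for_each_possible_cpu(cpu) {
		struct softnet_data *sd = &per_cpu(softnet_data, cpu);

		/* Reuse the threaded-NAPI poll loop for the backlog;
		 * the thread is bound to its CPU like ksoftirqd. */
		sd->backlog.thread = kthread_run_on_cpu(napi_threaded_poll,
							&sd->backlog, cpu,
							"napi/backlog-%u");
		if (IS_ERR(sd->backlog.thread))
			return PTR_ERR(sd->backlog.thread);
	}
	return 0;
}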

Sebastian
  

Comments

Jakub Kicinski Aug. 14, 2023, 6:24 p.m. UTC | #1
On Mon, 14 Aug 2023 11:35:26 +0200 Sebastian Andrzej Siewior wrote:
> The RPS code and the "deferred skb free" code both send an IPI/function
> call to a remote CPU, in which a softirq is raised. This leads to a
> warning on PREEMPT_RT because raising softirqs from a function call led
> to undesired behaviour in the past. I had duct tape in RT for the
> "deferred skb free" case, and Wander Lairson Costa reported the RPS case.

Could you find a less invasive solution?
The backlog is used by veth == most containerized environments.
This change has a very high risk of regression for a lot of people.
  
Jakub Kicinski Aug. 17, 2023, 3:30 p.m. UTC | #2
On Thu, 17 Aug 2023 15:16:12 +0200 Sebastian Andrzej Siewior wrote:
> I've been looking at veth. In the XDP case it has its own NAPI
> instance; in the non-XDP case it uses the backlog. This path is
> invoked from ndo_start_xmit, i.e. from the user's write(), so BH is
> disabled and interrupts are enabled at this point, and it should be
> somewhat rate-limited. Couldn't we bypass the backlog in this case
> and deliver the packet directly to the stack?

The backlog in veth eats measurable percentage points of RPS (requests
per second) in real workloads, and I think a number of people have
looked at getting rid of it. So it's a worthy goal for sure, but it may
not be a trivial fix.
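
For illustration, a sketch of what such a bypass could look like in
veth_forward_skb() (the netif_receive_skb() line is the hypothetical
change; upstream uses __netif_rx(), which enqueues to the per-CPU
backlog):

static int veth_forward_skb(struct net_device *dev, struct sk_buff *skb,
			    struct veth_rq *rq, bool xdp)
{
	return __dev_forward_skb(dev, skb) ?: xdp ?
		veth_xdp_rx(rq, skb) :
		/* deliver in the sender's context instead of deferring
		 * to the backlog; see the two problems below */
		netif_receive_skb(skb);
}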

To my knowledge the two main problems are:
 - we don't want to charge the sending application for the processing
   of both "sides" of the connection and all the switching costs.
 - we may get an AA deadlock if the packet ends up looping in any way
   (see the sketch at the end of this message).

Or at least that's what I remember the problems being at 8 am :)
Adding Daniel and Martin to CC; Paolo would also know this better than
me, but I think he's AFK for the rest of the week.
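
For reference, the transmit side already bounds such loops with a
per-CPU recursion counter (condensed from include/linux/netdevice.h);
an inline veth receive path would presumably need an equivalent guard:

#define XMIT_RECURSION_LIMIT	8

static inline bool dev_xmit_recursion(void)
{
	/* True once a packet has re-entered dev_queue_xmit() on this
	 * CPU too many times, e.g. because it is looping. */
	return unlikely(__this_cpu_read(softnet_data.xmit.recursion) >
			XMIT_RECURSION_LIMIT);
}

static inline void dev_xmit_recursion_inc(void)
{
	__this_cpu_inc(softnet_data.xmit.recursion);
}

static inline void dev_xmit_recursion_dec(void)
{
	__this_cpu_dec(softnet_data.xmit.recursion);
}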