[RFC,net-next,1/2] net: Use SMP threads for backlog NAPI.

Message ID 20230814093528.117342-2-bigeasy@linutronix.de
State New
Series net: Use SMP threads for backlog NAPI.

Commit Message

Sebastian Andrzej Siewior Aug. 14, 2023, 9:35 a.m. UTC
  Backlog NAPI is a per-CPU NAPI struct only (with no device behind it),
used by drivers which don't do NAPI themselves and by RPS.
Non-NAPI drivers use the CPU-local backlog NAPI. If RPS is enabled then
a flow is computed for the skb and, based on that flow, the skb can be
enqueued on a remote CPU. Scheduling/raising the softirq (for backlog's
NAPI) on the remote CPU isn't trivial because a softirq is only
scheduled on the local CPU and performed after the hardirq is done.
In order to schedule a softirq on the remote CPU, an IPI is sent to the
remote CPU; its IPI handler then schedules the backlog NAPI on what is
now the local CPU.
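
For reference, this is the IPI handler as it exists today (it is the one
removed further down in this patch); it runs in hardirq context on the
remote CPU and schedules that CPU's backlog NAPI from there:

/* Called from hardirq (IPI) context */
static void rps_trigger_softirq(void *data)
{
	struct softnet_data *sd = data;

	____napi_schedule(sd, &sd->backlog);
	sd->received_rps++;
}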

On PREEMPT_RT interrupts are force-threaded. Soft interrupts are raised
within the interrupt thread and processed after the interrupt handler
has completed, still within the context of the interrupt thread. The
softirq is thus handled in the context where it originated.

With force-threaded interrupts enabled, ksoftirqd is woken up if a
softirq is raised from hardirq context, which is the case if it is
raised from an IPI. Additionally, there is a warning on PREEMPT_RT if
the softirq is raised from the idle thread.
This was done for two reasons:
- With threaded interrupts the processing should happen in thread
  context (where it originated), and ksoftirqd is the only thread for
  this context if the softirq is raised from hardirq. Using the
  currently running task instead would "punish" a random task.
- Once ksoftirqd is active it consumes all further softirqs until it
  stops running. This changed recently and is no longer the case.

Instead of keeping the backlog NAPI in ksoftirqd (in force-threaded/
PREEMPT_RT setups) I am proposing NAPI threads for backlog.
The "proper" setup with threaded NAPI is not doable because the threads
are not pinned to an individual CPU and their affinity can be changed
by the user. Additionally a dummy network device would have to be
assigned, and CPU hotplug has to be considered if additional CPUs show
up. All this could probably be done/solved, but the smpboot threads
already provide this infrastructure.

Create NAPI threads for backlog. Each thread runs the inner loop from
napi_threaded_poll(); only the wait part is different.
Since there are now per-CPU threads for backlog, the remote IPI for
signaling is not needed and can be removed. The backlog NAPI can always
be scheduled, since scheduling it ends up waking the corresponding
thread. Since "deferred skb free" uses a similar IPI mechanism for
signaling, it now also uses the backlog threads.
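
The per-CPU threads are created via the smpboot infrastructure; the
core of it (taken from the diff below) is:

static DEFINE_PER_CPU(struct task_struct *, backlog_napi);

static struct smp_hotplug_thread backlog_threads = {
	.store			= &backlog_napi,
	.thread_should_run	= backlog_napi_should_run,
	.thread_fn		= run_backlog_napi,
	.thread_comm		= "backlog_napi/%u",
	.setup			= backlog_napi_setup,
};

/* registered once during boot, from net_dev_init() */
smpboot_register_percpu_thread(&backlog_threads);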

This makes NAPI threads mandatory for backlog and they cannot be
disabled. The other visible change with RPS (or backlog usage in
general) is that the processing now shows up in `top' while earlier it
would remain unaccounted.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 include/linux/netdevice.h |   8 --
 net/core/dev.c            | 226 +++++++++++++-------------------------
 net/core/net-procfs.c     |   2 +-
 net/core/skbuff.c         |   2 +-
 4 files changed, 79 insertions(+), 159 deletions(-)
  

Comments

Sebastian Andrzej Siewior Sept. 20, 2023, 3:57 p.m. UTC | #1
On 2023-08-23 15:35:41 [+0200], Paolo Abeni wrote:
> On Mon, 2023-08-14 at 11:35 +0200, Sebastian Andrzej Siewior wrote:
> > @@ -4781,7 +4733,7 @@ static int enqueue_to_backlog(struct sk_buff *skb, int cpu,
> >  		 * We can use non atomic operation since we own the queue lock
> >  		 */
> >  		if (!__test_and_set_bit(NAPI_STATE_SCHED, &sd->backlog.state))
> > -			napi_schedule_rps(sd);
> > +			__napi_schedule_irqoff(&sd->backlog);
> >  		goto enqueue;
> >  	}
> >  	reason = SKB_DROP_REASON_CPU_BACKLOG;
> 
> I *think* that the above could be quite dangerous when cpu ==
> smp_processor_id() - that is, with plain veth usage.
> 
> Currently, each packet runs into the rx path just after
> enqueue_to_backlog()/tx completes.
> 
> With this patch there will be a burst effect, where the backlog thread
> will run after a few (several) packets will be enqueued, when the
> process scheduler will decide - note that the current CPU is already
> hosting a running process, the tx thread.
> 
> The above can cause packet drops (due to limited buffering) or very
> high latency (due to long burst), even in non overload situation, quite
> hard to debug.
> 
> I think the above needs to be an opt-in, but I guess that even RT
> deployments doing some packet forwarding will not be happy with this
> on.

I've been looking at this again and have been thinking about what you
said here. I think part of the problem is that we lack a policy/
mechanism for deciding when a DoS is happening and what to do about it.

Before commit d15121be74856 ("Revert "softirq: Let ksoftirqd do its
job"") when a lot of network packets are processed, processing is moved
to ksoftirqd and continues based on how the scheduler schedules the
SCHED_OTHER ksoftirqd task. This avoids lock-ups of the system, which
can do something else in between. Any interrupt will not continue the
outstanding softirq backlog but will wait for ksoftirqd. So it
basically avoids the networking overload and throttles the throughput
if needed.

This isn't the case after that commit. Now the CPU can be stuck
processing networking packets if they come in fast enough. Even if
ksoftirqd is woken up, the next interrupt (say the timer) will continue
with at least one round of softirq processing.
By using NAPI threads it is possible to give control back to the
scheduler, which can throttle the NAPI processing in favour of other
threads that ask for CPU time. As you pointed out, waking the thread
does not guarantee that it will do the NAPI work immediately; it can be
delayed based on the current load on the system.

This could be influenced by assigning the NAPI thread a SCHED_FIFO
priority. Based on the priority it could be ensured that the thread
starts right away, or "later" if something else is more important.
However, this opens the DoS window again: the scheduler will keep the
NAPI thread on the CPU for as long as it asks for it, with no
throttling.
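
Just as a sketch of what assigning a SCHED_FIFO priority could look
like (not part of this patch): the smpboot setup hook from this series
runs in the freshly created per-CPU thread, so it could bump its own
priority via sched_set_fifo(), e.g.:

static void backlog_napi_setup(unsigned int cpu)
{
	struct softnet_data *sd = per_cpu_ptr(&softnet_data, cpu);
	struct napi_struct *napi = &sd->backlog;

	napi->thread = this_cpu_read(backlog_napi);
	set_bit(NAPI_STATE_THREADED, &napi->state);
	/* hypothetical: run the backlog thread as SCHED_FIFO */
	sched_set_fifo(current);
}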

If we could somehow define a DoS condition for when we are overwhelmed
with packets, then we could act on it and throttle. This in turn would
allow a SCHED_FIFO priority without the fear of a lockup if the system
is flooded with packets.

> Cheers,
> 
> Paolo

Sebastian
  
Ferenc Fejes Sept. 21, 2023, 10:41 a.m. UTC | #2
Hi!

On Wed, 2023-09-20 at 17:57 +0200, Sebastian Andrzej Siewior wrote:
> On 2023-08-23 15:35:41 [+0200], Paolo Abeni wrote:
> > On Mon, 2023-08-14 at 11:35 +0200, Sebastian Andrzej Siewior wrote:
> > > @@ -4781,7 +4733,7 @@ static int enqueue_to_backlog(struct
> > > sk_buff *skb, int cpu,
> > >  		 * We can use non atomic operation since we own
> > > the queue lock
> > >  		 */
> > >  		if (!__test_and_set_bit(NAPI_STATE_SCHED, &sd-
> > > >backlog.state))
> > > -			napi_schedule_rps(sd);
> > > +			__napi_schedule_irqoff(&sd->backlog);
> > >  		goto enqueue;
> > >  	}
> > >  	reason = SKB_DROP_REASON_CPU_BACKLOG;
> > 
> > I *think* that the above could be quite dangerous when cpu ==
> > smp_processor_id() - that is, with plain veth usage.
> > 
> > Currently, each packet runs into the rx path just after
> > enqueue_to_backlog()/tx completes.
> > 
> > With this patch there will be a burst effect, where the backlog
> > thread
> > will run after a few (several) packets will be enqueued, when the
> > process scheduler will decide - note that the current CPU is
> > already
> > hosting a running process, the tx thread.
> > 
> > The above can cause packet drops (due to limited buffering) or very
> > high latency (due to long burst), even in non overload situation,
> > quite
> > hard to debug.
> > 
> > I think the above needs to be an opt-in, but I guess that even RT
> > deployments doing some packet forwarding will not be happy with
> > this
> > on.
> 
> I've been looking at this again and have been thinking what you said
> here. I think part of the problem is that we lack a policy/ mechanism
> when a DoS is happening and what to do.
> 
> Before commit d15121be74856 ("Revert "softirq: Let ksoftirqd do its
> job"") when a lot of network packets are processed then processing is
> moved to ksoftirqd and continues based on how the scheduler schedules
> the SCHED_OTHER ksoftirqd task. This avoids lock-ups of the system
> and
> it can do something else in between. Any interrupt will not continue
> the
> outstanding softirq backlog but wait for ksoftirqd. So it basically
> avoids the networking overload. It throttles the throughput if
> needed.
> 
> This isn't the case after that commit. Now, the CPU can be stuck with
> processing networking packets if the packets come in fast enough.
> Even
> if ksoftirqd is woken up, the next interrupt (say the timer) will
> continue with at least one round.
> By using NAPI-threads it is possible to give the control back to the
> scheduler which can throttle the NAPI processing in favour of other
> threads that ask for CPU. As you pointed out, waking the thread does
> not
> guarantee that it will immediately do the NAPI work. It can be
> delayed
> based on current load on the system.
> 
> This could be influenced by assigning the NAPI-thread a SCHED_FIFO
> priority. Based on the priority it could be ensured that the thread
> starts right away or "later" if something else is more important.
> However, this opens the DoS window again: The scheduler will put the
> NAPI thread on CPU as long as it asks for it with no throttling.
> 
> If we could somehow define a DoS condition once we are overwhelmed
> with
> packets, then we could act on it and throttle it. This in turn would
> allow a SCHED_FIFO priority without the fear of a lockup if the
> system
> is flooded with packets.

Can this be avoided if we reuse gro_flush_timeout as the maximum time
the NAPI thread can be scheduled?

> 
> > Cheers,
> > 
> > Paolo
> 
> Sebastian

Ferenc
  
Sebastian Andrzej Siewior Sept. 22, 2023, 7:26 a.m. UTC | #3
On 2023-09-21 12:41:33 [+0200], Ferenc Fejes wrote:
> Hi!
Hi,

> > If we could somehow define a DoS condition once we are overwhelmed
> > with
> > packets, then we could act on it and throttle it. This in turn would
> > allow a SCHED_FIFO priority without the fear of a lockup if the
> > system
> > is flooded with packets.
> 
> Can this be avoided if we reuse gro_flush_timeout as the maximum time
> the NAPI thread can be scheduled?

First, the run time needs to be accounted somehow. I observed that some
cards/systems tend to pull only a few packets on each interrupt while
others pull more packets at a time.
So accounting packets within a time frame would probably make sense,
maybe even combined with packet size, assuming larger packets require
more processing time.
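
A very rough sketch of what I mean by "packets in a time frame" (the
fields and the constant are made up, nothing like this exists today):

/* hypothetical: sd->frame_start / sd->frame_pkts do not exist yet */
static bool backlog_over_limit(struct softnet_data *sd)
{
	if (time_after(jiffies, sd->frame_start + HZ / 100)) {
		sd->frame_start = jiffies;	/* start a new 10ms frame */
		sd->frame_pkts = 0;
	}
	return ++sd->frame_pkts > BACKLOG_PKTS_PER_FRAME;
}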

If you run at SCHED_OTHER you don't care, you can keep it running. With
SCHED_FIFO you would need to decide:
- how much is too much
- what to do once you reach too much

Once you reach "too much" you could:
- change the scheduling policy to SCHED_OTHER and keep going until it is
  no longer "too much in a given period", so you can flip it back.

- stop processing for a period of time and risk packet loss, which in
  this case is deemed better than continuing.

- pull packets and drop them instead of injecting them into the stack
  (see the minimal XDP sketch below). Using XDP/eBPF might be easy since
  there is an API for that. One could even peek at packets to decide if
  some can be kept.
  This would rely on the system being able to do this quickly enough
  under a DoS condition.
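
A minimal XDP program along those lines (a standalone sketch, not part
of this series) would simply be:

// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* sketch: drop everything at driver level while the DoS condition holds */
SEC("xdp")
int backlog_overload_drop(struct xdp_md *ctx)
{
	return XDP_DROP;
}

char _license[] SEC("license") = "GPL";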

> 
> Ferenc

Sebastian
  
Paolo Abeni Sept. 22, 2023, 9:38 a.m. UTC | #4
On Wed, 2023-09-20 at 17:57 +0200, Sebastian Andrzej Siewior wrote:
> On 2023-08-23 15:35:41 [+0200], Paolo Abeni wrote:
> > On Mon, 2023-08-14 at 11:35 +0200, Sebastian Andrzej Siewior wrote:
> > > @@ -4781,7 +4733,7 @@ static int enqueue_to_backlog(struct sk_buff *skb, int cpu,
> > >  		 * We can use non atomic operation since we own the queue lock
> > >  		 */
> > >  		if (!__test_and_set_bit(NAPI_STATE_SCHED, &sd->backlog.state))
> > > -			napi_schedule_rps(sd);
> > > +			__napi_schedule_irqoff(&sd->backlog);
> > >  		goto enqueue;
> > >  	}
> > >  	reason = SKB_DROP_REASON_CPU_BACKLOG;
> > 
> > I *think* that the above could be quite dangerous when cpu ==
> > smp_processor_id() - that is, with plain veth usage.
> > 
> > Currently, each packet runs into the rx path just after
> > enqueue_to_backlog()/tx completes.
> > 
> > With this patch there will be a burst effect, where the backlog thread
> > will run after a few (several) packets will be enqueued, when the
> > process scheduler will decide - note that the current CPU is already
> > hosting a running process, the tx thread.
> > 
> > The above can cause packet drops (due to limited buffering) or very
> > high latency (due to long burst), even in non overload situation, quite
> > hard to debug.
> > 
> > I think the above needs to be an opt-in, but I guess that even RT
> > deployments doing some packet forwarding will not be happy with this
> > on.
> 
> I've been looking at this again and have been thinking what you said
> here. I think part of the problem is that we lack a policy/ mechanism
> when a DoS is happening and what to do.
> 
> Before commit d15121be74856 ("Revert "softirq: Let ksoftirqd do its
> job"") when a lot of network packets are processed then processing is
> moved to ksoftirqd and continues based on how the scheduler schedules
> the SCHED_OTHER ksoftirqd task. This avoids lock-ups of the system and
> it can do something else in between. Any interrupt will not continue the
> outstanding softirq backlog but wait for ksoftirqd. So it basically
> avoids the networking overload. It throttles the throughput if needed.
> 
> This isn't the case after that commit. Now, the CPU can be stuck with
> processing networking packets if the packets come in fast enough. Even
> if ksoftirqd is woken up, the next interrupt (say the timer) will
> continue with at least one round.
> By using NAPI-threads it is possible to give the control back to the
> scheduler which can throttle the NAPI processing in favour of other
> threads that ask for CPU. As you pointed out, waking the thread does not
> guarantee that it will immediately do the NAPI work. It can be delayed
> based on current load on the system.
> 
> This could be influenced by assigning the NAPI-thread a SCHED_FIFO
> priority. Based on the priority it could be ensured that the thread
> starts right away or "later" if something else is more important.
> However, this opens the DoS window again: The scheduler will put the
> NAPI thread on CPU as long as it asks for it with no throttling.
> 
> If we could somehow define a DoS condition once we are overwhelmed with
> packets, then we could act on it and throttle it. This in turn would
> allow a SCHED_FIFO priority without the fear of a lockup if the system
> is flooded with packets.

I declare ENOCOFFEE before starting, be warned! 

I fear this is becoming a bit too theoretical, but we can infer a DoS
condition if the napi thread enqueues a packet somewhere (socket buffer,
qdisc, tx ring, ???) and the queue utilization is "high" (say > 75% of
max).
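
i.e. roughly (the names and the 75% threshold are just placeholders):

/* hypothetical check at enqueue time */
if (READ_ONCE(q->qlen) > 3 * q->limit / 4)
	backlog_mark_overloaded();	/* made-up hook to start throttling */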

I have no idea how to throttle a FIFO thread while retaining its
priority.

More importantly, this kind of configuration is not really viable for a
generic !PREEMPT_RT build, while the concern I have with a NAPI-threaded
backlog (or serving the backlog from ksoftirqd) does apply there.

Cheers,

Paolo
  

Patch

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 0896aaa91dd7b..17e31a68e725e 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -3190,9 +3190,6 @@  struct softnet_data {
 	/* stats */
 	unsigned int		processed;
 	unsigned int		time_squeeze;
-#ifdef CONFIG_RPS
-	struct softnet_data	*rps_ipi_list;
-#endif
 
 	bool			in_net_rx_action;
 	bool			in_napi_threaded_poll;
@@ -3221,12 +3218,8 @@  struct softnet_data {
 	unsigned int		input_queue_head ____cacheline_aligned_in_smp;
 
 	/* Elements below can be accessed between CPUs for RPS/RFS */
-	call_single_data_t	csd ____cacheline_aligned_in_smp;
-	struct softnet_data	*rps_ipi_next;
-	unsigned int		cpu;
 	unsigned int		input_queue_tail;
 #endif
-	unsigned int		received_rps;
 	unsigned int		dropped;
 	struct sk_buff_head	input_pkt_queue;
 	struct napi_struct	backlog;
@@ -3236,7 +3229,6 @@  struct softnet_data {
 	int			defer_count;
 	int			defer_ipi_scheduled;
 	struct sk_buff		*defer_list;
-	call_single_data_t	defer_csd;
 };
 
 static inline void input_queue_head_incr(struct softnet_data *sd)
diff --git a/net/core/dev.c b/net/core/dev.c
index 636b41f0b32d6..40103238ac0a1 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -153,6 +153,7 @@ 
 #include <linux/prandom.h>
 #include <linux/once_lite.h>
 #include <net/netdev_rx_queue.h>
+#include <linux/smpboot.h>
 
 #include "dev.h"
 #include "net-sysfs.h"
@@ -4658,57 +4659,8 @@  bool rps_may_expire_flow(struct net_device *dev, u16 rxq_index,
 EXPORT_SYMBOL(rps_may_expire_flow);
 
 #endif /* CONFIG_RFS_ACCEL */
-
-/* Called from hardirq (IPI) context */
-static void rps_trigger_softirq(void *data)
-{
-	struct softnet_data *sd = data;
-
-	____napi_schedule(sd, &sd->backlog);
-	sd->received_rps++;
-}
-
 #endif /* CONFIG_RPS */
 
-/* Called from hardirq (IPI) context */
-static void trigger_rx_softirq(void *data)
-{
-	struct softnet_data *sd = data;
-
-	__raise_softirq_irqoff(NET_RX_SOFTIRQ);
-	smp_store_release(&sd->defer_ipi_scheduled, 0);
-}
-
-/*
- * After we queued a packet into sd->input_pkt_queue,
- * we need to make sure this queue is serviced soon.
- *
- * - If this is another cpu queue, link it to our rps_ipi_list,
- *   and make sure we will process rps_ipi_list from net_rx_action().
- *
- * - If this is our own queue, NAPI schedule our backlog.
- *   Note that this also raises NET_RX_SOFTIRQ.
- */
-static void napi_schedule_rps(struct softnet_data *sd)
-{
-	struct softnet_data *mysd = this_cpu_ptr(&softnet_data);
-
-#ifdef CONFIG_RPS
-	if (sd != mysd) {
-		sd->rps_ipi_next = mysd->rps_ipi_list;
-		mysd->rps_ipi_list = sd;
-
-		/* If not called from net_rx_action() or napi_threaded_poll()
-		 * we have to raise NET_RX_SOFTIRQ.
-		 */
-		if (!mysd->in_net_rx_action && !mysd->in_napi_threaded_poll)
-			__raise_softirq_irqoff(NET_RX_SOFTIRQ);
-		return;
-	}
-#endif /* CONFIG_RPS */
-	__napi_schedule_irqoff(&mysd->backlog);
-}
-
 #ifdef CONFIG_NET_FLOW_LIMIT
 int netdev_flow_limit_table_len __read_mostly = (1 << 12);
 #endif
@@ -4781,7 +4733,7 @@  static int enqueue_to_backlog(struct sk_buff *skb, int cpu,
 		 * We can use non atomic operation since we own the queue lock
 		 */
 		if (!__test_and_set_bit(NAPI_STATE_SCHED, &sd->backlog.state))
-			napi_schedule_rps(sd);
+			__napi_schedule_irqoff(&sd->backlog);
 		goto enqueue;
 	}
 	reason = SKB_DROP_REASON_CPU_BACKLOG;
@@ -5896,63 +5848,12 @@  static void flush_all_backlogs(void)
 	cpus_read_unlock();
 }
 
-static void net_rps_send_ipi(struct softnet_data *remsd)
-{
-#ifdef CONFIG_RPS
-	while (remsd) {
-		struct softnet_data *next = remsd->rps_ipi_next;
-
-		if (cpu_online(remsd->cpu))
-			smp_call_function_single_async(remsd->cpu, &remsd->csd);
-		remsd = next;
-	}
-#endif
-}
-
-/*
- * net_rps_action_and_irq_enable sends any pending IPI's for rps.
- * Note: called with local irq disabled, but exits with local irq enabled.
- */
-static void net_rps_action_and_irq_enable(struct softnet_data *sd)
-{
-#ifdef CONFIG_RPS
-	struct softnet_data *remsd = sd->rps_ipi_list;
-
-	if (remsd) {
-		sd->rps_ipi_list = NULL;
-
-		local_irq_enable();
-
-		/* Send pending IPI's to kick RPS processing on remote cpus. */
-		net_rps_send_ipi(remsd);
-	} else
-#endif
-		local_irq_enable();
-}
-
-static bool sd_has_rps_ipi_waiting(struct softnet_data *sd)
-{
-#ifdef CONFIG_RPS
-	return sd->rps_ipi_list != NULL;
-#else
-	return false;
-#endif
-}
-
 static int process_backlog(struct napi_struct *napi, int quota)
 {
 	struct softnet_data *sd = container_of(napi, struct softnet_data, backlog);
 	bool again = true;
 	int work = 0;
 
-	/* Check if we have pending ipi, its better to send them now,
-	 * not waiting net_rx_action() end.
-	 */
-	if (sd_has_rps_ipi_waiting(sd)) {
-		local_irq_disable();
-		net_rps_action_and_irq_enable(sd);
-	}
-
 	napi->weight = READ_ONCE(dev_rx_weight);
 	while (again) {
 		struct sk_buff *skb;
@@ -5977,7 +5878,7 @@  static int process_backlog(struct napi_struct *napi, int quota)
 			 * We can use a plain write instead of clear_bit(),
 			 * and we dont need an smp_mb() memory barrier.
 			 */
-			napi->state = 0;
+			napi->state = BIT(NAPI_STATE_THREADED);
 			again = false;
 		} else {
 			skb_queue_splice_tail_init(&sd->input_pkt_queue,
@@ -6634,6 +6535,8 @@  static void skb_defer_free_flush(struct softnet_data *sd)
 	if (!READ_ONCE(sd->defer_list))
 		return;
 
+	smp_store_release(&sd->defer_ipi_scheduled, 0);
+
 	spin_lock(&sd->defer_lock);
 	skb = sd->defer_list;
 	sd->defer_list = NULL;
@@ -6647,39 +6550,42 @@  static void skb_defer_free_flush(struct softnet_data *sd)
 	}
 }
 
+static void napi_threaded_poll_loop(struct napi_struct *napi)
+{
+	struct softnet_data *sd;
+
+	for (;;) {
+		bool repoll = false;
+		void *have;
+
+		local_bh_disable();
+		sd = this_cpu_ptr(&softnet_data);
+		sd->in_napi_threaded_poll = true;
+
+		have = netpoll_poll_lock(napi);
+		__napi_poll(napi, &repoll);
+		netpoll_poll_unlock(have);
+
+		sd->in_napi_threaded_poll = false;
+		barrier();
+
+		skb_defer_free_flush(sd);
+		local_bh_enable();
+
+		if (!repoll)
+			break;
+
+		cond_resched();
+	}
+}
+
 static int napi_threaded_poll(void *data)
 {
 	struct napi_struct *napi = data;
-	struct softnet_data *sd;
-	void *have;
 
 	while (!napi_thread_wait(napi)) {
-		for (;;) {
-			bool repoll = false;
 
-			local_bh_disable();
-			sd = this_cpu_ptr(&softnet_data);
-			sd->in_napi_threaded_poll = true;
-
-			have = netpoll_poll_lock(napi);
-			__napi_poll(napi, &repoll);
-			netpoll_poll_unlock(have);
-
-			sd->in_napi_threaded_poll = false;
-			barrier();
-
-			if (sd_has_rps_ipi_waiting(sd)) {
-				local_irq_disable();
-				net_rps_action_and_irq_enable(sd);
-			}
-			skb_defer_free_flush(sd);
-			local_bh_enable();
-
-			if (!repoll)
-				break;
-
-			cond_resched();
-		}
+		napi_threaded_poll_loop(napi);
 	}
 	return 0;
 }
@@ -6714,8 +6620,6 @@  static __latent_entropy void net_rx_action(struct softirq_action *h)
 				 */
 				if (!list_empty(&sd->poll_list))
 					goto start;
-				if (!sd_has_rps_ipi_waiting(sd))
-					goto end;
 			}
 			break;
 		}
@@ -6744,8 +6648,7 @@  static __latent_entropy void net_rx_action(struct softirq_action *h)
 	else
 		sd->in_net_rx_action = false;
 
-	net_rps_action_and_irq_enable(sd);
-end:;
+	local_irq_enable();
 }
 
 struct netdev_adjacent {
@@ -11157,7 +11060,7 @@  static int dev_cpu_dead(unsigned int oldcpu)
 	struct sk_buff **list_skb;
 	struct sk_buff *skb;
 	unsigned int cpu;
-	struct softnet_data *sd, *oldsd, *remsd = NULL;
+	struct softnet_data *sd, *oldsd;
 
 	local_irq_disable();
 	cpu = smp_processor_id();
@@ -11189,22 +11092,13 @@  static int dev_cpu_dead(unsigned int oldcpu)
 							    poll_list);
 
 		list_del_init(&napi->poll_list);
-		if (napi->poll == process_backlog)
-			napi->state = 0;
-		else
+		if (!WARN_ON(napi->poll == process_backlog))
 			____napi_schedule(sd, napi);
 	}
 
 	raise_softirq_irqoff(NET_TX_SOFTIRQ);
 	local_irq_enable();
 
-#ifdef CONFIG_RPS
-	remsd = oldsd->rps_ipi_list;
-	oldsd->rps_ipi_list = NULL;
-#endif
-	/* send out pending IPI's on offline CPU */
-	net_rps_send_ipi(remsd);
-
 	/* Process offline CPU's input_pkt_queue */
 	while ((skb = __skb_dequeue(&oldsd->process_queue))) {
 		netif_rx(skb);
@@ -11457,6 +11351,43 @@  static struct pernet_operations __net_initdata default_device_ops = {
  *
  */
 
+static DEFINE_PER_CPU(struct task_struct *, backlog_napi);
+
+static int backlog_napi_should_run(unsigned int cpu)
+{
+	struct softnet_data *sd = per_cpu_ptr(&softnet_data, cpu);
+	struct napi_struct *napi = &sd->backlog;
+
+	if (READ_ONCE(sd->defer_list))
+		return 1;
+
+	return test_bit(NAPI_STATE_SCHED, &napi->state);
+}
+
+static void run_backlog_napi(unsigned int cpu)
+{
+	struct softnet_data *sd = per_cpu_ptr(&softnet_data, cpu);
+
+	napi_threaded_poll_loop(&sd->backlog);
+}
+
+static void backlog_napi_setup(unsigned int cpu)
+{
+	struct softnet_data *sd = per_cpu_ptr(&softnet_data, cpu);
+	struct napi_struct *napi = &sd->backlog;
+
+	napi->thread = this_cpu_read(backlog_napi);
+	set_bit(NAPI_STATE_THREADED, &napi->state);
+}
+
+static struct smp_hotplug_thread backlog_threads = {
+	.store                  = &backlog_napi,
+	.thread_should_run      = backlog_napi_should_run,
+	.thread_fn              = run_backlog_napi,
+	.thread_comm            = "backlog_napi/%u",
+	.setup			= backlog_napi_setup,
+};
+
 /*
  *       This is called single threaded during boot, so no need
  *       to take the rtnl semaphore.
@@ -11497,17 +11428,14 @@  static int __init net_dev_init(void)
 #endif
 		INIT_LIST_HEAD(&sd->poll_list);
 		sd->output_queue_tailp = &sd->output_queue;
-#ifdef CONFIG_RPS
-		INIT_CSD(&sd->csd, rps_trigger_softirq, sd);
-		sd->cpu = i;
-#endif
-		INIT_CSD(&sd->defer_csd, trigger_rx_softirq, sd);
 		spin_lock_init(&sd->defer_lock);
 
 		init_gro_hash(&sd->backlog);
 		sd->backlog.poll = process_backlog;
 		sd->backlog.weight = weight_p;
+		INIT_LIST_HEAD(&sd->backlog.poll_list);
 	}
+	smpboot_register_percpu_thread(&backlog_threads);
 
 	dev_boot_phase = 0;
 
diff --git a/net/core/net-procfs.c b/net/core/net-procfs.c
index 09f7ed1a04e8a..086283cc8d47b 100644
--- a/net/core/net-procfs.c
+++ b/net/core/net-procfs.c
@@ -180,7 +180,7 @@  static int softnet_seq_show(struct seq_file *seq, void *v)
 		   sd->processed, sd->dropped, sd->time_squeeze, 0,
 		   0, 0, 0, 0, /* was fastroute */
 		   0,	/* was cpu_collision */
-		   sd->received_rps, flow_limit_count,
+		   0 /* was received_rps */, flow_limit_count,
 		   input_qlen + process_qlen, (int)seq->index,
 		   input_qlen, process_qlen);
 	return 0;
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 33fdf04d4334d..265a8aa6b3228 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -6802,7 +6802,7 @@  nodefer:	__kfree_skb(skb);
 	 * if we are unlucky enough (this seems very unlikely).
 	 */
 	if (unlikely(kick) && !cmpxchg(&sd->defer_ipi_scheduled, 0, 1))
-		smp_call_function_single_async(cpu, &sd->defer_csd);
+		__napi_schedule(&sd->backlog);
 }
 
 static void skb_splice_csum_page(struct sk_buff *skb, struct page *page,