From patchwork Mon Aug 14 09:35:27 2023
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 135302
From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Cc: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Peter Zijlstra, Thomas Gleixner, Wander Lairson Costa, Sebastian Andrzej Siewior
Subject: [RFC PATCH net-next 1/2] net: Use SMP threads for backlog NAPI.
Date: Mon, 14 Aug 2023 11:35:27 +0200
Message-Id: <20230814093528.117342-2-bigeasy@linutronix.de>
In-Reply-To: <20230814093528.117342-1-bigeasy@linutronix.de>
References: <20230814093528.117342-1-bigeasy@linutronix.de>

Backlog NAPI is a per-CPU NAPI struct only (with no device behind it) used
by drivers which don't implement NAPI themselves, and by RPS. Non-NAPI
drivers use the CPU-local backlog NAPI.
If RPS is enabled then a flow hash is computed for the skb and, based on
that flow, the skb can be enqueued on a remote CPU. Scheduling (raising)
the softirq for the backlog NAPI on the remote CPU isn't trivial because a
softirq can only be scheduled on the local CPU and is performed after the
hardirq is done. In order to schedule a softirq on a remote CPU, an IPI is
sent to that CPU, which then schedules the backlog NAPI on its (now local)
CPU.

On PREEMPT_RT interrupts are force-threaded. The soft interrupts are
raised within the interrupt thread and processed after the interrupt
handler completed, still within the context of the interrupt thread. The
softirq is thus handled in the context where it originated.

With force-threaded interrupts enabled, ksoftirqd is woken up if a softirq
is raised from hardirq context, which is the case if it is raised from an
IPI. Additionally there is a warning on PREEMPT_RT if the softirq is
raised from the idle thread. This was done for two reasons:

- With threaded interrupts the processing should happen in thread context
  (where it originated) and ksoftirqd is the only thread for this context
  if the softirq was raised from hardirq. Using the currently running task
  instead would "punish" a random task.
- Once ksoftirqd is active it consumes all further softirqs until it stops
  running. This changed recently and is no longer the case.

Instead of keeping the backlog NAPI in ksoftirqd (in force-threaded/
PREEMPT_RT setups) I am proposing NAPI threads for backlog. The "proper"
setup with threaded NAPI is not doable because the threads are not pinned
to an individual CPU and can be modified by the user. Additionally a dummy
network device would have to be assigned, and CPU hotplug has to be
considered if additional CPUs show up. All of this could probably be
done/solved, but the smpboot threads already provide that infrastructure.

Create NAPI threads for backlog. The thread runs the inner loop from
napi_threaded_poll(); the wait part is different.
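The smpboot pattern relied on above can be illustrated outside the kernel. The sketch below is a minimal userspace analogue (hypothetical names, pthreads instead of smpboot threads, a counter instead of an skb queue): rather than sending an IPI to raise a softirq on the remote CPU, the producer marks work as pending and wakes a dedicated per-CPU worker, mirroring the should_run()/thread_fn() split of struct smp_hotplug_thread.

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Toy per-"CPU" backlog worker; all names are illustrative, not kernel API. */
struct backlog_worker {
	pthread_mutex_t lock;
	pthread_cond_t wake;
	int pending;    /* stand-in for NAPI_STATE_SCHED / queued skbs */
	int processed;
	bool stop;
	pthread_t thread;
};

/* Analogue of smp_hotplug_thread::thread_should_run. */
static bool should_run(struct backlog_worker *w)
{
	return w->pending > 0 || w->stop;
}

/* Analogue of smp_hotplug_thread::thread_fn: drain the backlog. */
static void *worker_fn(void *arg)
{
	struct backlog_worker *w = arg;

	pthread_mutex_lock(&w->lock);
	for (;;) {
		while (!should_run(w))
			pthread_cond_wait(&w->wake, &w->lock);
		if (w->stop && !w->pending)
			break;
		w->processed += w->pending;	/* "poll" the backlog */
		w->pending = 0;
	}
	pthread_mutex_unlock(&w->lock);
	return NULL;
}

/* Replacement for the remote IPI: mark work pending and wake the thread. */
static void backlog_schedule(struct backlog_worker *w, int n)
{
	pthread_mutex_lock(&w->lock);
	w->pending += n;
	pthread_cond_signal(&w->wake);
	pthread_mutex_unlock(&w->lock);
}
```

The point of the pattern is that "scheduling" backlog work on another CPU reduces to a plain wakeup of that CPU's thread, so no hardirq-context IPI handler (and no ksoftirqd involvement on PREEMPT_RT) is needed.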
Since there are now per-CPU threads for backlog, the remote IPI for
signaling is no longer needed and can be removed. The backlog NAPI can
always be scheduled since scheduling it ends in waking the corresponding
thread. Since the "deferred skb free" uses a similar IPI mechanism for
signaling, it is switched over to the backlog threads as well. This makes
NAPI threads mandatory for backlog; they can not be disabled.

The other visible change with RPS (or backlog usage in general) is that
the work now shows up in `top' while earlier it remained unaccounted.

Signed-off-by: Sebastian Andrzej Siewior
---
 include/linux/netdevice.h |   8 --
 net/core/dev.c            | 226 +++++++++++++-------------------------
 net/core/net-procfs.c     |   2 +-
 net/core/skbuff.c         |   2 +-
 4 files changed, 79 insertions(+), 159 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 0896aaa91dd7b..17e31a68e725e 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -3190,9 +3190,6 @@ struct softnet_data {
 	/* stats */
 	unsigned int		processed;
 	unsigned int		time_squeeze;
-#ifdef CONFIG_RPS
-	struct softnet_data	*rps_ipi_list;
-#endif
 	bool			in_net_rx_action;
 	bool			in_napi_threaded_poll;
@@ -3221,12 +3218,8 @@ struct softnet_data {
 	unsigned int		input_queue_head ____cacheline_aligned_in_smp;

 	/* Elements below can be accessed between CPUs for RPS/RFS */
-	call_single_data_t	csd ____cacheline_aligned_in_smp;
-	struct softnet_data	*rps_ipi_next;
-	unsigned int		cpu;
 	unsigned int		input_queue_tail;
 #endif
-	unsigned int		received_rps;
 	unsigned int		dropped;
 	struct sk_buff_head	input_pkt_queue;
 	struct napi_struct	backlog;
@@ -3236,7 +3229,6 @@ struct softnet_data {
 	int			defer_count;
 	int			defer_ipi_scheduled;
 	struct sk_buff		*defer_list;
-	call_single_data_t	defer_csd;
 };

 static inline void input_queue_head_incr(struct softnet_data *sd)
diff --git a/net/core/dev.c b/net/core/dev.c
index 636b41f0b32d6..40103238ac0a1 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -153,6 +153,7 @@
 #include
 #include
 #include
+#include

 #include "dev.h"
 #include "net-sysfs.h"
@@ -4658,57 +4659,8 @@ bool rps_may_expire_flow(struct net_device *dev, u16 rxq_index,
 EXPORT_SYMBOL(rps_may_expire_flow);

 #endif /* CONFIG_RFS_ACCEL */

-
-/* Called from hardirq (IPI) context */
-static void rps_trigger_softirq(void *data)
-{
-	struct softnet_data *sd = data;
-
-	____napi_schedule(sd, &sd->backlog);
-	sd->received_rps++;
-}
-
 #endif /* CONFIG_RPS */

-/* Called from hardirq (IPI) context */
-static void trigger_rx_softirq(void *data)
-{
-	struct softnet_data *sd = data;
-
-	__raise_softirq_irqoff(NET_RX_SOFTIRQ);
-	smp_store_release(&sd->defer_ipi_scheduled, 0);
-}
-
-/*
- * After we queued a packet into sd->input_pkt_queue,
- * we need to make sure this queue is serviced soon.
- *
- * - If this is another cpu queue, link it to our rps_ipi_list,
- *   and make sure we will process rps_ipi_list from net_rx_action().
- *
- * - If this is our own queue, NAPI schedule our backlog.
- *   Note that this also raises NET_RX_SOFTIRQ.
- */
-static void napi_schedule_rps(struct softnet_data *sd)
-{
-	struct softnet_data *mysd = this_cpu_ptr(&softnet_data);
-
-#ifdef CONFIG_RPS
-	if (sd != mysd) {
-		sd->rps_ipi_next = mysd->rps_ipi_list;
-		mysd->rps_ipi_list = sd;
-
-		/* If not called from net_rx_action() or napi_threaded_poll()
-		 * we have to raise NET_RX_SOFTIRQ.
-		 */
-		if (!mysd->in_net_rx_action && !mysd->in_napi_threaded_poll)
-			__raise_softirq_irqoff(NET_RX_SOFTIRQ);
-		return;
-	}
-#endif /* CONFIG_RPS */
-	__napi_schedule_irqoff(&mysd->backlog);
-}
-
 #ifdef CONFIG_NET_FLOW_LIMIT
 int netdev_flow_limit_table_len __read_mostly = (1 << 12);
 #endif
@@ -4781,7 +4733,7 @@ static int enqueue_to_backlog(struct sk_buff *skb, int cpu,
 		 * We can use non atomic operation since we own the queue lock
 		 */
 		if (!__test_and_set_bit(NAPI_STATE_SCHED, &sd->backlog.state))
-			napi_schedule_rps(sd);
+			__napi_schedule_irqoff(&sd->backlog);
 		goto enqueue;
 	}
 	reason = SKB_DROP_REASON_CPU_BACKLOG;
@@ -5896,63 +5848,12 @@ static void flush_all_backlogs(void)
 	cpus_read_unlock();
 }

-static void net_rps_send_ipi(struct softnet_data *remsd)
-{
-#ifdef CONFIG_RPS
-	while (remsd) {
-		struct softnet_data *next = remsd->rps_ipi_next;
-
-		if (cpu_online(remsd->cpu))
-			smp_call_function_single_async(remsd->cpu, &remsd->csd);
-		remsd = next;
-	}
-#endif
-}
-
-/*
- * net_rps_action_and_irq_enable sends any pending IPI's for rps.
- * Note: called with local irq disabled, but exits with local irq enabled.
- */
-static void net_rps_action_and_irq_enable(struct softnet_data *sd)
-{
-#ifdef CONFIG_RPS
-	struct softnet_data *remsd = sd->rps_ipi_list;
-
-	if (remsd) {
-		sd->rps_ipi_list = NULL;
-
-		local_irq_enable();
-
-		/* Send pending IPI's to kick RPS processing on remote cpus. */
-		net_rps_send_ipi(remsd);
-	} else
-#endif
-		local_irq_enable();
-}
-
-static bool sd_has_rps_ipi_waiting(struct softnet_data *sd)
-{
-#ifdef CONFIG_RPS
-	return sd->rps_ipi_list != NULL;
-#else
-	return false;
-#endif
-}
-
 static int process_backlog(struct napi_struct *napi, int quota)
 {
 	struct softnet_data *sd = container_of(napi, struct softnet_data, backlog);
 	bool again = true;
 	int work = 0;

-	/* Check if we have pending ipi, its better to send them now,
-	 * not waiting net_rx_action() end.
-	 */
-	if (sd_has_rps_ipi_waiting(sd)) {
-		local_irq_disable();
-		net_rps_action_and_irq_enable(sd);
-	}
-
 	napi->weight = READ_ONCE(dev_rx_weight);
 	while (again) {
 		struct sk_buff *skb;
@@ -5977,7 +5878,7 @@ static int process_backlog(struct napi_struct *napi, int quota)
 			 * We can use a plain write instead of clear_bit(),
 			 * and we dont need an smp_mb() memory barrier.
 			 */
-			napi->state = 0;
+			napi->state = BIT(NAPI_STATE_THREADED);
 			again = false;
 		} else {
 			skb_queue_splice_tail_init(&sd->input_pkt_queue,
@@ -6634,6 +6535,8 @@ static void skb_defer_free_flush(struct softnet_data *sd)
 	if (!READ_ONCE(sd->defer_list))
 		return;

+	smp_store_release(&sd->defer_ipi_scheduled, 0);
+
 	spin_lock(&sd->defer_lock);
 	skb = sd->defer_list;
 	sd->defer_list = NULL;
@@ -6647,39 +6550,42 @@ static void skb_defer_free_flush(struct softnet_data *sd)
 	}
 }

+static void napi_threaded_poll_loop(struct napi_struct *napi)
+{
+	struct softnet_data *sd;
+
+	for (;;) {
+		bool repoll = false;
+		void *have;
+
+		local_bh_disable();
+		sd = this_cpu_ptr(&softnet_data);
+		sd->in_napi_threaded_poll = true;
+
+		have = netpoll_poll_lock(napi);
+		__napi_poll(napi, &repoll);
+		netpoll_poll_unlock(have);
+
+		sd->in_napi_threaded_poll = false;
+		barrier();
+
+		skb_defer_free_flush(sd);
+		local_bh_enable();
+
+		if (!repoll)
+			break;
+
+		cond_resched();
+	}
+}
+
 static int napi_threaded_poll(void *data)
 {
 	struct napi_struct *napi = data;
-	struct softnet_data *sd;
-	void *have;

 	while (!napi_thread_wait(napi)) {
-		for (;;) {
-			bool repoll = false;
-
-			local_bh_disable();
-			sd = this_cpu_ptr(&softnet_data);
-			sd->in_napi_threaded_poll = true;
-
-			have = netpoll_poll_lock(napi);
-			__napi_poll(napi, &repoll);
-			netpoll_poll_unlock(have);
-
-			sd->in_napi_threaded_poll = false;
-			barrier();
-
-			if (sd_has_rps_ipi_waiting(sd)) {
-				local_irq_disable();
-				net_rps_action_and_irq_enable(sd);
-			}
-			skb_defer_free_flush(sd);
-			local_bh_enable();
-
-			if (!repoll)
-				break;
-
-			cond_resched();
-		}
+		napi_threaded_poll_loop(napi);
 	}

 	return 0;
 }
@@ -6714,8 +6620,6 @@ static __latent_entropy void net_rx_action(struct softirq_action *h)
 			 */
 			if (!list_empty(&sd->poll_list))
 				goto start;
-			if (!sd_has_rps_ipi_waiting(sd))
-				goto end;
 		}
 		break;
 	}
@@ -6744,8 +6648,7 @@ static __latent_entropy void net_rx_action(struct softirq_action *h)
 	else
 		sd->in_net_rx_action = false;

-	net_rps_action_and_irq_enable(sd);
-end:;
+	local_irq_enable();
 }

 struct netdev_adjacent {
@@ -11157,7 +11060,7 @@ static int dev_cpu_dead(unsigned int oldcpu)
 	struct sk_buff **list_skb;
 	struct sk_buff *skb;
 	unsigned int cpu;
-	struct softnet_data *sd, *oldsd, *remsd = NULL;
+	struct softnet_data *sd, *oldsd;

 	local_irq_disable();
 	cpu = smp_processor_id();
@@ -11189,22 +11092,13 @@ static int dev_cpu_dead(unsigned int oldcpu)
 						    poll_list);

 		list_del_init(&napi->poll_list);
-		if (napi->poll == process_backlog)
-			napi->state = 0;
-		else
+		if (!WARN_ON(napi->poll == process_backlog))
 			____napi_schedule(sd, napi);
 	}

 	raise_softirq_irqoff(NET_TX_SOFTIRQ);
 	local_irq_enable();

-#ifdef CONFIG_RPS
-	remsd = oldsd->rps_ipi_list;
-	oldsd->rps_ipi_list = NULL;
-#endif
-	/* send out pending IPI's on offline CPU */
-	net_rps_send_ipi(remsd);
-
 	/* Process offline CPU's input_pkt_queue */
 	while ((skb = __skb_dequeue(&oldsd->process_queue))) {
 		netif_rx(skb);
@@ -11457,6 +11351,43 @@ static struct pernet_operations __net_initdata default_device_ops = {
  *
  */

+static DEFINE_PER_CPU(struct task_struct *, backlog_napi);
+
+static int backlog_napi_should_run(unsigned int cpu)
+{
+	struct softnet_data *sd = per_cpu_ptr(&softnet_data, cpu);
+	struct napi_struct *napi = &sd->backlog;
+
+	if (READ_ONCE(sd->defer_list))
+		return 1;
+
+	return test_bit(NAPI_STATE_SCHED, &napi->state);
+}
+
+static void run_backlog_napi(unsigned int cpu)
+{
+	struct softnet_data *sd = per_cpu_ptr(&softnet_data, cpu);
+
+	napi_threaded_poll_loop(&sd->backlog);
+}
+
+static void backlog_napi_setup(unsigned int cpu)
+{
+	struct softnet_data *sd = per_cpu_ptr(&softnet_data, cpu);
+	struct napi_struct *napi = &sd->backlog;
+
+	napi->thread = this_cpu_read(backlog_napi);
+	set_bit(NAPI_STATE_THREADED, &napi->state);
+}
+
+static struct smp_hotplug_thread backlog_threads = {
+	.store			= &backlog_napi,
+	.thread_should_run	= backlog_napi_should_run,
+	.thread_fn		= run_backlog_napi,
+	.thread_comm		= "backlog_napi/%u",
+	.setup			= backlog_napi_setup,
+};
+
 /*
  * This is called single threaded during boot, so no need
  * to take the rtnl semaphore.
@@ -11497,17 +11428,14 @@ static int __init net_dev_init(void)
 #endif
 		INIT_LIST_HEAD(&sd->poll_list);
 		sd->output_queue_tailp = &sd->output_queue;
-#ifdef CONFIG_RPS
-		INIT_CSD(&sd->csd, rps_trigger_softirq, sd);
-		sd->cpu = i;
-#endif
-		INIT_CSD(&sd->defer_csd, trigger_rx_softirq, sd);
 		spin_lock_init(&sd->defer_lock);

 		init_gro_hash(&sd->backlog);
 		sd->backlog.poll = process_backlog;
 		sd->backlog.weight = weight_p;
+		INIT_LIST_HEAD(&sd->backlog.poll_list);
 	}
+	smpboot_register_percpu_thread(&backlog_threads);

 	dev_boot_phase = 0;

diff --git a/net/core/net-procfs.c b/net/core/net-procfs.c
index 09f7ed1a04e8a..086283cc8d47b 100644
--- a/net/core/net-procfs.c
+++ b/net/core/net-procfs.c
@@ -180,7 +180,7 @@ static int softnet_seq_show(struct seq_file *seq, void *v)
 		   sd->processed, sd->dropped, sd->time_squeeze, 0,
 		   0, 0, 0, 0, /* was fastroute */
 		   0,	/* was cpu_collision */
-		   sd->received_rps, flow_limit_count,
+		   0 /* was received_rps */, flow_limit_count,
 		   input_qlen + process_qlen, (int)seq->index,
 		   input_qlen, process_qlen);
 	return 0;
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 33fdf04d4334d..265a8aa6b3228 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -6802,7 +6802,7 @@ nodefer:	__kfree_skb(skb);
 	 * if we are unlucky enough (this seems very unlikely).
 	 */
 	if (unlikely(kick) && !cmpxchg(&sd->defer_ipi_scheduled, 0, 1))
-		smp_call_function_single_async(cpu, &sd->defer_csd);
+		__napi_schedule(&sd->backlog);
 }

 static void skb_splice_csum_page(struct sk_buff *skb, struct page *page,

From patchwork Mon Aug 14 09:35:28 2023
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 135329
From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Cc: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Peter Zijlstra, Thomas Gleixner, Wander Lairson Costa, Sebastian Andrzej Siewior
Subject: [RFC PATCH 2/2] softirq: Drop the warning from do_softirq_post_smp_call_flush().
Date: Mon, 14 Aug 2023 11:35:28 +0200
Message-Id: <20230814093528.117342-3-bigeasy@linutronix.de>
In-Reply-To: <20230814093528.117342-1-bigeasy@linutronix.de>
References: <20230814093528.117342-1-bigeasy@linutronix.de>

Once ksoftirqd became active, all softirqs which were raised would not be
processed immediately but were delayed to ksoftirqd. On PREEMPT_RT this
means softirqs which were raised in a threaded interrupt (at elevated
process priority) would not be served after the interrupt handler
completed its work but had to wait until ksoftirqd (normal priority) got
to run on the CPU. On a busy system with plenty of RT tasks this could be
delayed for quite some time and lead to problems in general. This is an
undesired situation, and it has been attempted to avoid situations in
which ksoftirqd becomes scheduled.

This changed with commit d15121be74856 ("Revert "softirq: Let ksoftirqd
do its job"") and now a threaded interrupt handler will handle soft
interrupts at its end even if ksoftirqd is pending. That means they will
be processed in the context in which they were raised. Unfortunately so
will all other soft interrupts which were raised (or enqueued) earlier and
are not yet handled. This happens if an interrupt thread with higher
priority is raised and has to catch up. This isn't a new problem, and the
new high-priority thread will PI-boost the current softirq owner or start
from scratch if ksoftirqd wasn't running yet.
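As a toy model of the behaviour described above, the sketch below (userspace, hypothetical names; softirqs reduced to a pending bitmask, handlers reduced to a counter) illustrates what this patch makes do_softirq_post_smp_call_flush() do: unconditionally process whatever is pending after the SMP-call flush, instead of comparing against a pre-flush snapshot and warning on a mismatch.

```c
#include <assert.h>

/* Toy userspace model, not kernel code: softirqs as a pending bitmask. */
static unsigned int softirq_pending;

static void raise_softirq(unsigned int nr)
{
	softirq_pending |= 1u << nr;
}

/* Drain the pending mask; returns how many "softirqs" were handled. */
static int invoke_softirq(void)
{
	int handled = 0;

	while (softirq_pending) {
		unsigned int nr = __builtin_ctz(softirq_pending);

		softirq_pending &= ~(1u << nr);
		handled++;	/* a real kernel would run the handler here */
	}
	return handled;
}

/* Patched behaviour: no snapshot, no warning -- just process what is
 * pending in the context where the SMP-call queue was flushed. */
static int do_softirq_post_smp_call_flush(void)
{
	return invoke_softirq();
}
```

The design point is that the snapshot/WARN_ON_ONCE dance only made sense while pending softirqs could get stuck behind ksoftirqd; once they are always handled inline, the unconditional flush is sufficient.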
Since pending ksoftirqd no longer blocks other interrupt threads from
handling soft interrupts, I believe the warning can be disabled. The
pending softirq work has to be solved differently.

Remove the warning and update the comment.

Signed-off-by: Sebastian Andrzej Siewior
---
 include/linux/interrupt.h |  4 ++--
 kernel/smp.c              |  4 +---
 kernel/softirq.c          | 12 +++++-------
 3 files changed, 8 insertions(+), 12 deletions(-)

diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index a92bce40b04b3..5143ae0ea9356 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -590,9 +590,9 @@ asmlinkage void do_softirq(void);
 asmlinkage void __do_softirq(void);

 #ifdef CONFIG_PREEMPT_RT
-extern void do_softirq_post_smp_call_flush(unsigned int was_pending);
+extern void do_softirq_post_smp_call_flush(void);
 #else
-static inline void do_softirq_post_smp_call_flush(unsigned int unused)
+static inline void do_softirq_post_smp_call_flush(void)
 {
 	do_softirq();
 }
diff --git a/kernel/smp.c b/kernel/smp.c
index 385179dae360e..cd7db5ffe95ab 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -554,7 +554,6 @@ static void __flush_smp_call_function_queue(bool warn_cpu_offline)
  */
 void flush_smp_call_function_queue(void)
 {
-	unsigned int was_pending;
 	unsigned long flags;

 	if (llist_empty(this_cpu_ptr(&call_single_queue)))
@@ -562,10 +561,9 @@ void flush_smp_call_function_queue(void)

 	local_irq_save(flags);
 	/* Get the already pending soft interrupts for RT enabled kernels */
-	was_pending = local_softirq_pending();
 	__flush_smp_call_function_queue(true);
 	if (local_softirq_pending())
-		do_softirq_post_smp_call_flush(was_pending);
+		do_softirq_post_smp_call_flush();

 	local_irq_restore(flags);
 }
diff --git a/kernel/softirq.c b/kernel/softirq.c
index 807b34ccd7973..aa299cb3ff47b 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -281,15 +281,13 @@ static inline void invoke_softirq(void)

 /*
  * flush_smp_call_function_queue() can raise a soft interrupt in a function
- * call. On RT kernels this is undesired and the only known functionality
- * in the block layer which does this is disabled on RT. If soft interrupts
- * get raised which haven't been raised before the flush, warn so it can be
- * investigated.
+ * call. On RT kernels this is undesired because the work is no longer processed
+ * in the context where it originated. It is not especially harmfull but best to
+ * be avoided.
  */
-void do_softirq_post_smp_call_flush(unsigned int was_pending)
+void do_softirq_post_smp_call_flush(void)
 {
-	if (WARN_ON_ONCE(was_pending != local_softirq_pending()))
-		invoke_softirq();
+	invoke_softirq();
 }

 #else /* CONFIG_PREEMPT_RT */