From patchwork Wed Nov 30 16:56:54 2022
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 27896
Subject: [PATCH net-next 18/35] rxrpc: Create a per-local endpoint receive queue and I/O thread
From: David Howells <dhowells@redhat.com>
To: netdev@vger.kernel.org
Cc: Marc Dionne, linux-afs@lists.infradead.org, dhowells@redhat.com,
    linux-kernel@vger.kernel.org
Date: Wed, 30 Nov 2022 16:56:54 +0000
Message-ID: <166982741440.621383.4325041430555712070.stgit@warthog.procyon.org.uk>
In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk>
References: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk>
User-Agent: StGit/1.5

Create a per-local receive queue to which, in a future patch, all incoming
packets will be directed, and an I/O thread that will process those packets
and perform all transmission of packets.

Destruction of the local endpoint is also moved from the local processor
work item (which will be absorbed) to the I/O thread.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
---
 net/rxrpc/ar-internal.h  |   10 +++++++++
 net/rxrpc/io_thread.c    |   51 +++++++++++++++++++++++++++++++++++++++++++++-
 net/rxrpc/local_object.c |   39 ++++++++++++++++++++---------------
 net/rxrpc/proc.c         |   12 ++++++++---
 4 files changed, 91 insertions(+), 21 deletions(-)

diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
index 523cc9c5ab12..de82c25956a6 100644
--- a/net/rxrpc/ar-internal.h
+++ b/net/rxrpc/ar-internal.h
@@ -110,6 +110,8 @@ struct rxrpc_net {
 	atomic_t	stat_rx_acks[256];
 	atomic_t	stat_why_req_ack[8];
+
+	atomic_t	stat_io_loop;
 };
 
 /*
@@ -280,12 +282,14 @@ struct rxrpc_local {
 	struct hlist_node	link;
 	struct socket		*socket;	/* my UDP socket */
 	struct work_struct	processor;
+	struct task_struct	*io_thread;
 	struct list_head	ack_tx_queue;	/* List of ACKs that need sending */
 	spinlock_t		ack_tx_lock;	/* ACK list lock */
 	struct rxrpc_sock __rcu	*service;	/* Service(s) listening on this endpoint */
 	struct rw_semaphore	defrag_sem;	/* control re-enablement of IP DF bit */
 	struct sk_buff_head	reject_queue;	/* packets awaiting rejection */
 	struct sk_buff_head	event_queue;	/* endpoint event packets awaiting processing */
+	struct sk_buff_head	rx_queue;	/* Received packets */
 	struct rb_root		client_bundles;	/* Client connection bundles by socket params */
 	spinlock_t		client_bundles_lock; /* Lock for client_bundles */
 	spinlock_t		lock;		/* access lock */
@@ -954,6 +958,11 @@ void rxrpc_input_implicit_end_call(struct rxrpc_sock *, struct rxrpc_connection
  * io_thread.c
  */
 int rxrpc_input_packet(struct sock *, struct sk_buff *);
+int rxrpc_io_thread(void *data);
+static inline void rxrpc_wake_up_io_thread(struct rxrpc_local *local)
+{
+	wake_up_process(local->io_thread);
+}
 
 /*
  * insecure.c
@@ -984,6 +993,7 @@ void rxrpc_put_local(struct rxrpc_local *, enum rxrpc_local_trace);
 struct rxrpc_local *rxrpc_use_local(struct rxrpc_local *, enum rxrpc_local_trace);
 void rxrpc_unuse_local(struct rxrpc_local *, enum rxrpc_local_trace);
 void rxrpc_queue_local(struct rxrpc_local *);
+void rxrpc_destroy_local(struct rxrpc_local *local);
 void rxrpc_destroy_all_locals(struct rxrpc_net *);
 
 static inline bool __rxrpc_unuse_local(struct rxrpc_local *local,
diff --git a/net/rxrpc/io_thread.c b/net/rxrpc/io_thread.c
index d2aaad5afa1d..0b3e096e3d50 100644
--- a/net/rxrpc/io_thread.c
+++ b/net/rxrpc/io_thread.c
@@ -1,7 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0-or-later
 /* RxRPC packet reception
  *
- * Copyright (C) 2007, 2016 Red Hat, Inc. All Rights Reserved.
+ * Copyright (C) 2007, 2016, 2022 Red Hat, Inc. All Rights Reserved.
  * Written by David Howells (dhowells@redhat.com)
  */
 
@@ -368,3 +368,52 @@ int rxrpc_input_packet(struct sock *udp_sk, struct sk_buff *skb)
 	_leave(" [badmsg]");
 	return 0;
 }
+
+/*
+ * I/O and event handling thread.
+ */
+int rxrpc_io_thread(void *data)
+{
+	struct sk_buff_head rx_queue;
+	struct rxrpc_local *local = data;
+	struct sk_buff *skb;
+
+	skb_queue_head_init(&rx_queue);
+
+	set_user_nice(current, MIN_NICE);
+
+	for (;;) {
+		rxrpc_inc_stat(local->rxnet, stat_io_loop);
+
+		/* Process received packets and errors. */
+		if ((skb = __skb_dequeue(&rx_queue))) {
+			// TODO: Input packet
+			rxrpc_free_skb(skb, rxrpc_skb_put_input);
+			continue;
+		}
+
+		if (!skb_queue_empty(&local->rx_queue)) {
+			spin_lock_irq(&local->rx_queue.lock);
+			skb_queue_splice_tail_init(&local->rx_queue, &rx_queue);
+			spin_unlock_irq(&local->rx_queue.lock);
+			continue;
+		}
+
+		set_current_state(TASK_INTERRUPTIBLE);
+		if (!skb_queue_empty(&local->rx_queue)) {
+			__set_current_state(TASK_RUNNING);
+			continue;
+		}
+
+		if (kthread_should_stop())
+			break;
+		schedule();
+	}
+
+	__set_current_state(TASK_RUNNING);
+	rxrpc_see_local(local, rxrpc_local_stop);
+	rxrpc_destroy_local(local);
+	local->io_thread = NULL;
+	rxrpc_see_local(local, rxrpc_local_stopped);
+	return 0;
+}
diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c
index 1617ce651b9b..7c61349984e3 100644
--- a/net/rxrpc/local_object.c
+++ b/net/rxrpc/local_object.c
@@ -103,6 +103,7 @@ static struct rxrpc_local *rxrpc_alloc_local(struct rxrpc_net *rxnet,
 		init_rwsem(&local->defrag_sem);
 		skb_queue_head_init(&local->reject_queue);
 		skb_queue_head_init(&local->event_queue);
+		skb_queue_head_init(&local->rx_queue);
 		local->client_bundles = RB_ROOT;
 		spin_lock_init(&local->client_bundles_lock);
 		spin_lock_init(&local->lock);
@@ -126,6 +127,7 @@ static int rxrpc_open_socket(struct rxrpc_local *local, struct net *net)
 	struct udp_tunnel_sock_cfg tuncfg = {NULL};
 	struct sockaddr_rxrpc *srx = &local->srx;
 	struct udp_port_cfg udp_conf = {0};
+	struct task_struct *io_thread;
 	struct sock *usk;
 	int ret;
 
@@ -185,8 +187,23 @@ static int rxrpc_open_socket(struct rxrpc_local *local, struct net *net)
 		BUG();
 	}
 
+	io_thread = kthread_run(rxrpc_io_thread, local,
+				"krxrpcio/%u", ntohs(udp_conf.local_udp_port));
+	if (IS_ERR(io_thread)) {
+		ret = PTR_ERR(io_thread);
+		goto error_sock;
+	}
+
+	local->io_thread = io_thread;
 	_leave(" = 0");
 	return 0;
+
+error_sock:
+	kernel_sock_shutdown(local->socket, SHUT_RDWR);
+	local->socket->sk->sk_user_data = NULL;
+	sock_release(local->socket);
+	local->socket = NULL;
+	return ret;
 }
 
 /*
@@ -360,19 +377,8 @@ struct rxrpc_local *rxrpc_use_local(struct rxrpc_local *local,
  */
 void rxrpc_unuse_local(struct rxrpc_local *local, enum rxrpc_local_trace why)
 {
-	unsigned int debug_id;
-	int r, u;
-
-	if (local) {
-		debug_id = local->debug_id;
-		r = refcount_read(&local->ref);
-		u = atomic_dec_return(&local->active_users);
-		trace_rxrpc_local(debug_id, why, r, u);
-		if (u == 0) {
-			rxrpc_get_local(local, rxrpc_local_get_queue);
-			rxrpc_queue_local(local);
-		}
-	}
+	if (local && __rxrpc_unuse_local(local, why))
+		kthread_stop(local->io_thread);
 }
 
 /*
@@ -382,7 +388,7 @@ void rxrpc_unuse_local(struct rxrpc_local *local, enum rxrpc_local_trace why)
  * Closing the socket cannot be done from bottom half context or RCU callback
  * context because it might sleep.
  */
-static void rxrpc_local_destroyer(struct rxrpc_local *local)
+void rxrpc_destroy_local(struct rxrpc_local *local)
 {
 	struct socket *socket = local->socket;
 	struct rxrpc_net *rxnet = local->rxnet;
@@ -411,6 +417,7 @@ static void rxrpc_local_destroyer(struct rxrpc_local *local)
 	 */
 	rxrpc_purge_queue(&local->reject_queue);
 	rxrpc_purge_queue(&local->event_queue);
+	rxrpc_purge_queue(&local->rx_queue);
 }
 
 /*
@@ -430,10 +437,8 @@ static void rxrpc_local_processor(struct work_struct *work)
 
 	do {
 		again = false;
-		if (!__rxrpc_use_local(local, rxrpc_local_use_work)) {
-			rxrpc_local_destroyer(local);
+		if (!__rxrpc_use_local(local, rxrpc_local_use_work))
 			break;
-		}
 
 		if (!list_empty(&local->ack_tx_queue)) {
 			rxrpc_transmit_ack_packets(local);
diff --git a/net/rxrpc/proc.c b/net/rxrpc/proc.c
index d3a6d24cf871..35d5b43c677e 100644
--- a/net/rxrpc/proc.c
+++ b/net/rxrpc/proc.c
@@ -342,7 +342,7 @@ static int rxrpc_local_seq_show(struct seq_file *seq, void *v)
 	if (v == SEQ_START_TOKEN) {
 		seq_puts(seq,
 			 "Proto Local                                          "
-			 " Use Act\n");
+			 " Use Act RxQ\n");
 		return 0;
 	}
 
@@ -351,10 +351,11 @@ static int rxrpc_local_seq_show(struct seq_file *seq, void *v)
 	sprintf(lbuff, "%pISpc", &local->srx.transport);
 
 	seq_printf(seq,
-		   "UDP   %-47.47s %3u %3u\n",
+		   "UDP   %-47.47s %3u %3u %3u\n",
 		   lbuff,
 		   refcount_read(&local->ref),
-		   atomic_read(&local->active_users));
+		   atomic_read(&local->active_users),
+		   local->rx_queue.qlen);
 
 	return 0;
 }
@@ -463,6 +464,9 @@ int rxrpc_stats_show(struct seq_file *seq, void *v)
 		   "Buffers  : txb=%u rxb=%u\n",
 		   atomic_read(&rxrpc_nr_txbuf),
 		   atomic_read(&rxrpc_n_rx_skbs));
+	seq_printf(seq,
+		   "IO-thread: loops=%u\n",
+		   atomic_read(&rxnet->stat_io_loop));
 	return 0;
 }
 
@@ -492,5 +496,7 @@ int rxrpc_stats_clear(struct file *file, char *buf, size_t size)
 	memset(&rxnet->stat_rx_acks, 0, sizeof(rxnet->stat_rx_acks));
 	memset(&rxnet->stat_why_req_ack, 0, sizeof(rxnet->stat_why_req_ack));
+
+	atomic_set(&rxnet->stat_io_loop, 0);
 	return size;
 }
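
The change hinges on a standard kthread producer/consumer pattern: in a later
patch of this series the UDP receive path will feed local->rx_queue from
softirq context and wake the thread via rxrpc_wake_up_io_thread(), and the
thread splices the shared queue into a private list before working through
it, so the queue lock is only held briefly.  The following is a minimal,
self-contained sketch of that pattern; the demo_* names are hypothetical and
only illustrate the structure, they are not part of the patch.

/* Hypothetical illustration only -- mirrors the structure of
 * rxrpc_io_thread(): packets are queued from softirq context and the thread
 * splices them into a private queue before processing.
 */
#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/skbuff.h>
#include <linux/spinlock.h>

struct demo_endpoint {
	struct sk_buff_head	rx_queue;	/* fed from softirq context */
	struct task_struct	*io_thread;
};

/* Producer side (e.g. a UDP encap_rcv callback): queue and wake. */
static void demo_rx_enqueue(struct demo_endpoint *ep, struct sk_buff *skb)
{
	skb_queue_tail(&ep->rx_queue, skb);	/* takes rx_queue.lock, IRQs off */
	wake_up_process(ep->io_thread);		/* cf. rxrpc_wake_up_io_thread() */
}

/* Consumer side: the I/O thread main loop. */
static int demo_io_thread(void *data)
{
	struct demo_endpoint *ep = data;
	struct sk_buff_head queue;
	struct sk_buff *skb;

	skb_queue_head_init(&queue);

	while (!kthread_should_stop()) {
		/* Drain the private queue first; no locking needed here. */
		while ((skb = __skb_dequeue(&queue)))
			kfree_skb(skb);		/* real code would process it */

		/* Pull everything the producer queued in one go. */
		if (!skb_queue_empty(&ep->rx_queue)) {
			spin_lock_irq(&ep->rx_queue.lock);
			skb_queue_splice_tail_init(&ep->rx_queue, &queue);
			spin_unlock_irq(&ep->rx_queue.lock);
			continue;
		}

		/* Sleep until woken, re-checking the queue after setting the
		 * task state so a concurrent wakeup is not lost.
		 */
		set_current_state(TASK_INTERRUPTIBLE);
		if (skb_queue_empty(&ep->rx_queue) && !kthread_should_stop())
			schedule();
		__set_current_state(TASK_RUNNING);
	}
	return 0;
}

The owner of such an endpoint would start the thread with
kthread_run(demo_io_thread, ep, "demo-io") and stop it with
kthread_stop(ep->io_thread), which is also how rxrpc_open_socket() and
rxrpc_unuse_local() manage the real thread in this patch.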