From patchwork Wed Nov 30 16:54:26 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 27877 Organization: Red Hat UK Ltd. Registered in England and Wales under Company Registration No.
3798903 Subject: [PATCH net-next 01/35] rxrpc: Implement an in-kernel rxperf server for testing purposes From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Wed, 30 Nov 2022 16:54:26 +0000 Message-ID: <166982726601.621383.15475080589217572083.stgit@warthog.procyon.org.uk> In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> References: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750941105975862645?= X-GMAIL-MSGID: =?utf-8?q?1750941105975862645?= Implement an in-kernel rxperf server to allow kernel-based rxrpc services to be tested directly, unlike with AFS where they're accessed by the fileserver when the latter decides it wants to. This is implemented as a module that, if loaded, opens UDP port 7009 (afs3-rmtsys) and listens on it for incoming calls. Calls can be generated using the rxperf command shipped with OpenAFS, for example. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- include/net/af_rxrpc.h | 1 net/rxrpc/Kconfig | 7 + net/rxrpc/Makefile | 3 net/rxrpc/rxperf.c | 619 ++++++++++++++++++++++++++++++++++++++++++++++++ net/rxrpc/server_key.c | 25 ++ 5 files changed, 655 insertions(+) create mode 100644 net/rxrpc/rxperf.c diff --git a/include/net/af_rxrpc.h b/include/net/af_rxrpc.h index b69ca695935c..dc033f08191e 100644 --- a/include/net/af_rxrpc.h +++ b/include/net/af_rxrpc.h @@ -71,5 +71,6 @@ void rxrpc_kernel_set_max_life(struct socket *, struct rxrpc_call *, unsigned long); int rxrpc_sock_set_min_security_level(struct sock *sk, unsigned int val); +int rxrpc_sock_set_security_keyring(struct sock *, struct key *); #endif /* _NET_RXRPC_H */ diff --git a/net/rxrpc/Kconfig b/net/rxrpc/Kconfig index accd35c05577..7ae023b37a83 100644 --- a/net/rxrpc/Kconfig +++ b/net/rxrpc/Kconfig @@ -58,4 +58,11 @@ config RXKAD See Documentation/networking/rxrpc.rst. +config RXPERF + tristate "RxRPC test service" + help + Provide an rxperf service tester. This listens on UDP port 7009 for + incoming calls from the rxperf program (an example of which can be + found in OpenAFS). + endif diff --git a/net/rxrpc/Makefile b/net/rxrpc/Makefile index fdeba488fc6e..79687477d93c 100644 --- a/net/rxrpc/Makefile +++ b/net/rxrpc/Makefile @@ -36,3 +36,6 @@ rxrpc-y := \ rxrpc-$(CONFIG_PROC_FS) += proc.o rxrpc-$(CONFIG_RXKAD) += rxkad.o rxrpc-$(CONFIG_SYSCTL) += sysctl.o + + +obj-$(CONFIG_RXPERF) += rxperf.o diff --git a/net/rxrpc/rxperf.c b/net/rxrpc/rxperf.c new file mode 100644 index 000000000000..277ba18f575d --- /dev/null +++ b/net/rxrpc/rxperf.c @@ -0,0 +1,619 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* In-kernel rxperf server for testing purposes. + * + * Copyright (C) 2022 Red Hat, Inc. All Rights Reserved. 
+ * Written by David Howells (dhowells@redhat.com) + */ + +#define pr_fmt(fmt) "rxperf: " fmt +#include +#include +#include +#include + +MODULE_DESCRIPTION("rxperf test server (afs)"); +MODULE_AUTHOR("Red Hat, Inc."); +MODULE_LICENSE("GPL"); + +#define RXPERF_PORT 7009 +#define RX_PERF_SERVICE 147 +#define RX_PERF_VERSION 3 +#define RX_PERF_SEND 0 +#define RX_PERF_RECV 1 +#define RX_PERF_RPC 3 +#define RX_PERF_FILE 4 +#define RX_PERF_MAGIC_COOKIE 0x4711 + +struct rxperf_proto_params { + __be32 version; + __be32 type; + __be32 rsize; + __be32 wsize; +} __packed; + +static const u8 rxperf_magic_cookie[] = { 0x00, 0x00, 0x47, 0x11 }; +static const u8 secret[8] = { 0xa7, 0x83, 0x8a, 0xcb, 0xc7, 0x83, 0xec, 0x94 }; + +enum rxperf_call_state { + RXPERF_CALL_SV_AWAIT_PARAMS, /* Server: Awaiting parameter block */ + RXPERF_CALL_SV_AWAIT_REQUEST, /* Server: Awaiting request data */ + RXPERF_CALL_SV_REPLYING, /* Server: Replying */ + RXPERF_CALL_SV_AWAIT_ACK, /* Server: Awaiting final ACK */ + RXPERF_CALL_COMPLETE, /* Completed or failed */ +}; + +struct rxperf_call { + struct rxrpc_call *rxcall; + struct iov_iter iter; + struct kvec kvec[1]; + struct work_struct work; + const char *type; + size_t iov_len; + size_t req_len; /* Size of request blob */ + size_t reply_len; /* Size of reply blob */ + unsigned int debug_id; + unsigned int operation_id; + struct rxperf_proto_params params; + __be32 tmp[2]; + s32 abort_code; + enum rxperf_call_state state; + short error; + unsigned short unmarshal; + u16 service_id; + int (*deliver)(struct rxperf_call *call); + void (*processor)(struct work_struct *work); +}; + +static struct socket *rxperf_socket; +static struct key *rxperf_sec_keyring; /* Ring of security/crypto keys */ +static struct workqueue_struct *rxperf_workqueue; + +static void rxperf_deliver_to_call(struct work_struct *work); +static int rxperf_deliver_param_block(struct rxperf_call *call); +static int rxperf_deliver_request(struct rxperf_call *call); +static int rxperf_process_call(struct rxperf_call *call); +static void rxperf_charge_preallocation(struct work_struct *work); + +static DECLARE_WORK(rxperf_charge_preallocation_work, + rxperf_charge_preallocation); + +static inline void rxperf_set_call_state(struct rxperf_call *call, + enum rxperf_call_state to) +{ + call->state = to; +} + +static inline void rxperf_set_call_complete(struct rxperf_call *call, + int error, s32 remote_abort) +{ + if (call->state != RXPERF_CALL_COMPLETE) { + call->abort_code = remote_abort; + call->error = error; + call->state = RXPERF_CALL_COMPLETE; + } +} + +static void rxperf_rx_discard_new_call(struct rxrpc_call *rxcall, + unsigned long user_call_ID) +{ + kfree((struct rxperf_call *)user_call_ID); +} + +static void rxperf_rx_new_call(struct sock *sk, struct rxrpc_call *rxcall, + unsigned long user_call_ID) +{ + queue_work(rxperf_workqueue, &rxperf_charge_preallocation_work); +} + +static void rxperf_queue_call_work(struct rxperf_call *call) +{ + queue_work(rxperf_workqueue, &call->work); +} + +static void rxperf_notify_rx(struct sock *sk, struct rxrpc_call *rxcall, + unsigned long call_user_ID) +{ + struct rxperf_call *call = (struct rxperf_call *)call_user_ID; + + if (call->state != RXPERF_CALL_COMPLETE) + rxperf_queue_call_work(call); +} + +static void rxperf_rx_attach(struct rxrpc_call *rxcall, unsigned long user_call_ID) +{ + struct rxperf_call *call = (struct rxperf_call *)user_call_ID; + + call->rxcall = rxcall; +} + +static void rxperf_notify_end_reply_tx(struct sock *sock, + struct rxrpc_call *rxcall, + 
unsigned long call_user_ID) +{ + rxperf_set_call_state((struct rxperf_call *)call_user_ID, + RXPERF_CALL_SV_AWAIT_ACK); +} + +/* + * Charge the incoming call preallocation. + */ +static void rxperf_charge_preallocation(struct work_struct *work) +{ + struct rxperf_call *call; + + for (;;) { + call = kzalloc(sizeof(*call), GFP_KERNEL); + if (!call) + break; + + call->type = "unset"; + call->debug_id = atomic_inc_return(&rxrpc_debug_id); + call->deliver = rxperf_deliver_param_block; + call->state = RXPERF_CALL_SV_AWAIT_PARAMS; + call->service_id = RX_PERF_SERVICE; + call->iov_len = sizeof(call->params); + call->kvec[0].iov_len = sizeof(call->params); + call->kvec[0].iov_base = &call->params; + iov_iter_kvec(&call->iter, READ, call->kvec, 1, call->iov_len); + INIT_WORK(&call->work, rxperf_deliver_to_call); + + if (rxrpc_kernel_charge_accept(rxperf_socket, + rxperf_notify_rx, + rxperf_rx_attach, + (unsigned long)call, + GFP_KERNEL, + call->debug_id) < 0) + break; + call = NULL; + } + + kfree(call); +} + +/* + * Open an rxrpc socket and bind it to be a server for callback notifications + * - the socket is left in blocking mode and non-blocking ops use MSG_DONTWAIT + */ +static int rxperf_open_socket(void) +{ + struct sockaddr_rxrpc srx; + struct socket *socket; + int ret; + + ret = sock_create_kern(&init_net, AF_RXRPC, SOCK_DGRAM, PF_INET6, + &socket); + if (ret < 0) + goto error_1; + + socket->sk->sk_allocation = GFP_NOFS; + + /* bind the callback manager's address to make this a server socket */ + memset(&srx, 0, sizeof(srx)); + srx.srx_family = AF_RXRPC; + srx.srx_service = RX_PERF_SERVICE; + srx.transport_type = SOCK_DGRAM; + srx.transport_len = sizeof(srx.transport.sin6); + srx.transport.sin6.sin6_family = AF_INET6; + srx.transport.sin6.sin6_port = htons(RXPERF_PORT); + + ret = rxrpc_sock_set_min_security_level(socket->sk, + RXRPC_SECURITY_ENCRYPT); + if (ret < 0) + goto error_2; + + ret = rxrpc_sock_set_security_keyring(socket->sk, rxperf_sec_keyring); + + ret = kernel_bind(socket, (struct sockaddr *)&srx, sizeof(srx)); + if (ret < 0) + goto error_2; + + rxrpc_kernel_new_call_notification(socket, rxperf_rx_new_call, + rxperf_rx_discard_new_call); + + ret = kernel_listen(socket, INT_MAX); + if (ret < 0) + goto error_2; + + rxperf_socket = socket; + rxperf_charge_preallocation(&rxperf_charge_preallocation_work); + return 0; + +error_2: + sock_release(socket); +error_1: + pr_err("Can't set up rxperf socket: %d\n", ret); + return ret; +} + +/* + * close the rxrpc socket rxperf was using + */ +static void rxperf_close_socket(void) +{ + kernel_listen(rxperf_socket, 0); + kernel_sock_shutdown(rxperf_socket, SHUT_RDWR); + flush_workqueue(rxperf_workqueue); + sock_release(rxperf_socket); +} + +/* + * Log remote abort codes that indicate that we have a protocol disagreement + * with the server. 
+ */ +static void rxperf_log_error(struct rxperf_call *call, s32 remote_abort) +{ + static int max = 0; + const char *msg; + int m; + + switch (remote_abort) { + case RX_EOF: msg = "unexpected EOF"; break; + case RXGEN_CC_MARSHAL: msg = "client marshalling"; break; + case RXGEN_CC_UNMARSHAL: msg = "client unmarshalling"; break; + case RXGEN_SS_MARSHAL: msg = "server marshalling"; break; + case RXGEN_SS_UNMARSHAL: msg = "server unmarshalling"; break; + case RXGEN_DECODE: msg = "opcode decode"; break; + case RXGEN_SS_XDRFREE: msg = "server XDR cleanup"; break; + case RXGEN_CC_XDRFREE: msg = "client XDR cleanup"; break; + case -32: msg = "insufficient data"; break; + default: + return; + } + + m = max; + if (m < 3) { + max = m + 1; + pr_info("Peer reported %s failure on %s\n", msg, call->type); + } +} + +/* + * deliver messages to a call + */ +static void rxperf_deliver_to_call(struct work_struct *work) +{ + struct rxperf_call *call = container_of(work, struct rxperf_call, work); + enum rxperf_call_state state; + u32 abort_code, remote_abort = 0; + int ret; + + if (call->state == RXPERF_CALL_COMPLETE) + return; + + while (state = call->state, + state == RXPERF_CALL_SV_AWAIT_PARAMS || + state == RXPERF_CALL_SV_AWAIT_REQUEST || + state == RXPERF_CALL_SV_AWAIT_ACK + ) { + if (state == RXPERF_CALL_SV_AWAIT_ACK) { + if (!rxrpc_kernel_check_life(rxperf_socket, call->rxcall)) + goto call_complete; + return; + } + + ret = call->deliver(call); + if (ret == 0) + ret = rxperf_process_call(call); + + switch (ret) { + case 0: + continue; + case -EINPROGRESS: + case -EAGAIN: + return; + case -ECONNABORTED: + rxperf_log_error(call, call->abort_code); + goto call_complete; + case -EOPNOTSUPP: + abort_code = RXGEN_OPCODE; + rxrpc_kernel_abort_call(rxperf_socket, call->rxcall, + abort_code, ret, "GOP"); + goto call_complete; + case -ENOTSUPP: + abort_code = RX_USER_ABORT; + rxrpc_kernel_abort_call(rxperf_socket, call->rxcall, + abort_code, ret, "GUA"); + goto call_complete; + case -EIO: + pr_err("Call %u in bad state %u\n", + call->debug_id, call->state); + fallthrough; + case -ENODATA: + case -EBADMSG: + case -EMSGSIZE: + case -ENOMEM: + case -EFAULT: + rxrpc_kernel_abort_call(rxperf_socket, call->rxcall, + RXGEN_SS_UNMARSHAL, ret, "GUM"); + goto call_complete; + default: + rxrpc_kernel_abort_call(rxperf_socket, call->rxcall, + RX_CALL_DEAD, ret, "GER"); + goto call_complete; + } + } + +call_complete: + rxperf_set_call_complete(call, ret, remote_abort); + /* The call may have been requeued */ + rxrpc_kernel_end_call(rxperf_socket, call->rxcall); + cancel_work(&call->work); + kfree(call); +} + +/* + * Extract a piece of data from the received data socket buffers. + */ +static int rxperf_extract_data(struct rxperf_call *call, bool want_more) +{ + u32 remote_abort = 0; + int ret; + + ret = rxrpc_kernel_recv_data(rxperf_socket, call->rxcall, &call->iter, + &call->iov_len, want_more, &remote_abort, + &call->service_id); + pr_debug("Extract i=%zu l=%zu m=%u ret=%d\n", + iov_iter_count(&call->iter), call->iov_len, want_more, ret); + if (ret == 0 || ret == -EAGAIN) + return ret; + + if (ret == 1) { + switch (call->state) { + case RXPERF_CALL_SV_AWAIT_REQUEST: + rxperf_set_call_state(call, RXPERF_CALL_SV_REPLYING); + break; + case RXPERF_CALL_COMPLETE: + pr_debug("premature completion %d", call->error); + return call->error; + default: + break; + } + return 0; + } + + rxperf_set_call_complete(call, ret, remote_abort); + return ret; +} + +/* + * Grab the operation ID from an incoming manager call. 
+ */ +static int rxperf_deliver_param_block(struct rxperf_call *call) +{ + u32 version; + int ret; + + /* Extract the parameter block */ + ret = rxperf_extract_data(call, true); + if (ret < 0) + return ret; + + version = ntohl(call->params.version); + call->operation_id = ntohl(call->params.type); + call->deliver = rxperf_deliver_request; + + if (version != RX_PERF_VERSION) { + pr_info("Version mismatch %x\n", version); + return -ENOTSUPP; + } + + switch (call->operation_id) { + case RX_PERF_SEND: + call->type = "send"; + call->reply_len = 0; + call->iov_len = 4; /* Expect req size */ + break; + case RX_PERF_RECV: + call->type = "recv"; + call->req_len = 0; + call->iov_len = 4; /* Expect reply size */ + break; + case RX_PERF_RPC: + call->type = "rpc"; + call->iov_len = 8; /* Expect req size and reply size */ + break; + case RX_PERF_FILE: + call->type = "file"; + fallthrough; + default: + return -EOPNOTSUPP; + } + + rxperf_set_call_state(call, RXPERF_CALL_SV_AWAIT_REQUEST); + return call->deliver(call); +} + +/* + * Deliver the request data. + */ +static int rxperf_deliver_request(struct rxperf_call *call) +{ + int ret; + + switch (call->unmarshal) { + case 0: + call->kvec[0].iov_len = call->iov_len; + call->kvec[0].iov_base = call->tmp; + iov_iter_kvec(&call->iter, READ, call->kvec, 1, call->iov_len); + call->unmarshal++; + fallthrough; + case 1: + ret = rxperf_extract_data(call, true); + if (ret < 0) + return ret; + + switch (call->operation_id) { + case RX_PERF_SEND: + call->type = "send"; + call->req_len = ntohl(call->tmp[0]); + call->reply_len = 0; + break; + case RX_PERF_RECV: + call->type = "recv"; + call->req_len = 0; + call->reply_len = ntohl(call->tmp[0]); + break; + case RX_PERF_RPC: + call->type = "rpc"; + call->req_len = ntohl(call->tmp[0]); + call->reply_len = ntohl(call->tmp[1]); + break; + default: + pr_info("Can't parse extra params\n"); + return -EIO; + } + + pr_debug("CALL op=%s rq=%zx rp=%zx\n", + call->type, call->req_len, call->reply_len); + + call->iov_len = call->req_len; + iov_iter_discard(&call->iter, READ, call->req_len); + call->unmarshal++; + fallthrough; + case 2: + ret = rxperf_extract_data(call, false); + if (ret < 0) + return ret; + call->unmarshal++; + fallthrough; + default: + return 0; + } +} + +/* + * Process a call for which we've received the request. + */ +static int rxperf_process_call(struct rxperf_call *call) +{ + struct msghdr msg = {}; + struct bio_vec bv[1]; + struct kvec iov[1]; + ssize_t n; + size_t reply_len = call->reply_len, len; + + rxrpc_kernel_set_tx_length(rxperf_socket, call->rxcall, + reply_len + sizeof(rxperf_magic_cookie)); + + while (reply_len > 0) { + len = min(reply_len, PAGE_SIZE); + bv[0].bv_page = ZERO_PAGE(0); + bv[0].bv_offset = 0; + bv[0].bv_len = len; + iov_iter_bvec(&msg.msg_iter, WRITE, bv, 1, len); + msg.msg_flags = MSG_MORE; + n = rxrpc_kernel_send_data(rxperf_socket, call->rxcall, &msg, + len, rxperf_notify_end_reply_tx); + if (n < 0) + return n; + if (n == 0) + return -EIO; + reply_len -= n; + } + + len = sizeof(rxperf_magic_cookie); + iov[0].iov_base = (void *)rxperf_magic_cookie; + iov[0].iov_len = len; + iov_iter_kvec(&msg.msg_iter, WRITE, iov, 1, len); + msg.msg_flags = 0; + n = rxrpc_kernel_send_data(rxperf_socket, call->rxcall, &msg, len, + rxperf_notify_end_reply_tx); + if (n >= 0) + return 0; /* Success */ + + if (n == -ENOMEM) + rxrpc_kernel_abort_call(rxperf_socket, call->rxcall, + RXGEN_SS_MARSHAL, -ENOMEM, "GOM"); + return n; +} + +/* + * Add a key to the security keyring. 
+ */ +static int rxperf_add_key(struct key *keyring) +{ + key_ref_t kref; + int ret; + + kref = key_create_or_update(make_key_ref(keyring, true), + "rxrpc_s", + __stringify(RX_PERF_SERVICE) ":2", + secret, + sizeof(secret), + KEY_POS_VIEW | KEY_POS_READ | KEY_POS_SEARCH + | KEY_USR_VIEW, + KEY_ALLOC_NOT_IN_QUOTA); + + if (IS_ERR(kref)) { + pr_err("Can't allocate rxperf server key: %ld\n", PTR_ERR(kref)); + return PTR_ERR(kref); + } + + ret = key_link(keyring, key_ref_to_ptr(kref)); + if (ret < 0) + pr_err("Can't link rxperf server key: %d\n", ret); + key_ref_put(kref); + return ret; +} + +/* + * Initialise the rxperf server. + */ +static int __init rxperf_init(void) +{ + struct key *keyring; + int ret = -ENOMEM; + + pr_info("Server registering\n"); + + rxperf_workqueue = alloc_workqueue("rxperf", 0, 0); + if (!rxperf_workqueue) + goto error_workqueue; + + keyring = keyring_alloc("rxperf_server", + GLOBAL_ROOT_UID, GLOBAL_ROOT_GID, current_cred(), + KEY_POS_VIEW | KEY_POS_READ | KEY_POS_SEARCH | + KEY_POS_WRITE | + KEY_USR_VIEW | KEY_USR_READ | KEY_USR_SEARCH | + KEY_USR_WRITE | + KEY_OTH_VIEW | KEY_OTH_READ | KEY_OTH_SEARCH, + KEY_ALLOC_NOT_IN_QUOTA, + NULL, NULL); + if (IS_ERR(keyring)) { + pr_err("Can't allocate rxperf server keyring: %ld\n", + PTR_ERR(keyring)); + goto error_keyring; + } + rxperf_sec_keyring = keyring; + ret = rxperf_add_key(keyring); + if (ret < 0) + goto error_key; + + ret = rxperf_open_socket(); + if (ret < 0) + goto error_socket; + return 0; + +error_socket: +error_key: + key_put(rxperf_sec_keyring); +error_keyring: + destroy_workqueue(rxperf_workqueue); + rcu_barrier(); +error_workqueue: + pr_err("Failed to register: %d\n", ret); + return ret; +} +late_initcall(rxperf_init); /* Must be called after net/ to create socket */ + +static void __exit rxperf_exit(void) +{ + pr_info("Server unregistering.\n"); + + rxperf_close_socket(); + key_put(rxperf_sec_keyring); + destroy_workqueue(rxperf_workqueue); + rcu_barrier(); +} +module_exit(rxperf_exit); + diff --git a/net/rxrpc/server_key.c b/net/rxrpc/server_key.c index ee269e0e6ee8..e51940589ee5 100644 --- a/net/rxrpc/server_key.c +++ b/net/rxrpc/server_key.c @@ -144,3 +144,28 @@ int rxrpc_server_keyring(struct rxrpc_sock *rx, sockptr_t optval, int optlen) _leave(" = 0 [key %x]", key->serial); return 0; } + +/** + * rxrpc_sock_set_security_keyring - Set the security keyring for a kernel service + * @sk: The socket to set the keyring on + * @keyring: The keyring to set + * + * Set the server security keyring on an rxrpc socket. This is used to provide + * the encryption keys for a kernel service. 
+ */ +int rxrpc_sock_set_security_keyring(struct sock *sk, struct key *keyring) +{ + struct rxrpc_sock *rx = rxrpc_sk(sk); + int ret = 0; + + lock_sock(sk); + if (rx->securities) + ret = -EINVAL; + else if (rx->sk.sk_state != RXRPC_UNBOUND) + ret = -EISCONN; + else + rx->securities = key_get(keyring); + release_sock(sk); + return ret; +} +EXPORT_SYMBOL(rxrpc_sock_set_security_keyring); From patchwork Wed Nov 30 16:54:34 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 27886 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp1038774wrr; Wed, 30 Nov 2022 09:00:17 -0800 (PST) X-Google-Smtp-Source: AA0mqf6RkPVAucQDcwnlZE/3R+mFSg9iYaD/weN1JPlx/VjUoJYHxzPoWO0HuxWngadcu9PtxCv9 X-Received: by 2002:aa7:871a:0:b0:575:d557:6f50 with SMTP id b26-20020aa7871a000000b00575d5576f50mr4230944pfo.29.1669827616434; Wed, 30 Nov 2022 09:00:16 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669827616; cv=none; d=google.com; s=arc-20160816; b=d20Fb7UCuaqwT2pom9gNt3abHo/bYIxQ0ytmvu4LDruMOWY1e1A/6d5+5bRVyQjmQ7 Y3W/8nrj9S6vmRdUFhW27U80Gb7PJg00raDYWSlU3zUVMfxDqrCmSHOLdiAYqNxX2hUn 9+suVpEy2vhKCa4ngL5VVYBghlEq8Ce7bwVMEq7pgXpoifvaJmLTXmaTQwOhcpAyYles LghRnMUQQ/rU81NYsDyi7uMV3RwQUPPjGEKG/lG2biYDe+fuRTQ1uxEE1jXZFkqlCSHb 5nSS78DyRBAxvnB8Hk5yzOOrMU03vJQnv8Rg3ieLXiGEhmxxaYTZkoG8Isyfp0GXmU+q ZvXQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:cc:to:from :subject:organization:dkim-signature; bh=tZhNZFams/h1M5Oz8q+H1JnK8KioZ8hbpqzvIEfUKEk=; b=DZyaPoIvGfvGZmgnitpxyHB9g+8etnjyQv8Mw/iSLm78ZEpQSZV3oosOm7k/V1H228 UxAa6ox+yHb0WuQTrDrT0f1E+iSOq0uflV9dxn5SSMgjbKQkJsXPlPOE/sjwsmL+hu// sogRtn5qKEM0kTpGUmC6HSN/z8LvCR3a5hEkZY2JoXEYIFWufc/BxR6lWEeSzAfLc6L2 J0z6fY8WqKzgP52mY3Ol037iged9GIkmJSwvNcqktRttnE94gFjWDM7uqBnOS4YGPvkq BTuubhfn3lY/fcymRnuAiGBNbujp/DTTQkXbLnar+UG8SqSikzOGnl/K2ZR3EnujxRjC +0JQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=DVj5vVNn; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: from out1.vger.email (out1.vger.email. 
Organization: Red Hat UK Ltd. Registered in England and Wales under Company Registration No.
3798903 Subject: [PATCH net-next 02/35] rxrpc: Fix call leak From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Wed, 30 Nov 2022 16:54:34 +0000 Message-ID: <166982727463.621383.1222840539683065766.stgit@warthog.procyon.org.uk> In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> References: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.7 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750941162933667062?= X-GMAIL-MSGID: =?utf-8?q?1750941162933667062?= When retransmitting a packet, rxrpc_resend() shouldn't be attaching a ref to the call to the txbuf as that pins the call and prevents the call from clearing the packet buffer. Signed-off-by: David Howells Fixes: d57a3a151660 ("rxrpc: Save last ACK's SACK table rather than marking txbufs") cc: Marc Dionne cc: linux-afs@lists.infradead.org --- net/rxrpc/call_event.c | 1 - 1 file changed, 1 deletion(-) diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c index 1e21a708390e..349f3df569ba 100644 --- a/net/rxrpc/call_event.c +++ b/net/rxrpc/call_event.c @@ -198,7 +198,6 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j) if (list_empty(&txb->tx_link)) { rxrpc_get_txbuf(txb, rxrpc_txbuf_get_retrans); - rxrpc_get_call(call, rxrpc_call_got_tx); list_add_tail(&txb->tx_link, &retrans_queue); set_bit(RXRPC_TXBUF_RESENT, &txb->flags); } From patchwork Wed Nov 30 16:54:43 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 27878 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp1038389wrr; Wed, 30 Nov 2022 08:59:31 -0800 (PST) X-Google-Smtp-Source: AA0mqf43gLPOrgjHyayxhkq55uXJe9U5paIspOx3utIk7GUwy5kmZJZRlohwUPXyQptUgIOEPkQM X-Received: by 2002:a17:90a:ae01:b0:213:e8b5:2d50 with SMTP id t1-20020a17090aae0100b00213e8b52d50mr66143971pjq.211.1669827571318; Wed, 30 Nov 2022 08:59:31 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669827571; cv=none; d=google.com; s=arc-20160816; b=fNE1fCusDAYLecMnJVItOiwE3x4DDRc8pDy07nfg/rWQt4t/9yD2F6JMXXuBMfXQr1 7bUYAInYs0IZOuYhpXODzrqlTUuwm+PV3jf5+g2bZo/VeqsuhLVwEph9qNdPMODRqjac faq2C58PlCnU4btNBniH3ASx2TER/L8s4X9nT/gXrKTASeZKZbSfzaC4myYV0OZTrh8H JJlhgb/vFhLTKfGRmRVZhefmC2gWObx+iP6CM3HboFzbDWuGXtGGD8waBNgfxPrjbad/ eLzKIMUipWMnt/Zs6lj++SouKiWEz89p5pbCGembUi0jF3AML7nOtLTjm8Bcj85R7egC e/3g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:cc:to:from :subject:organization:dkim-signature; bh=OVOQDiX6kbAmxrS487im8wmNPjye/rpt4SKfq1Eh8HI=; b=DDbjAnPJ7+vIPzNgSUQ75xnaHTqPe/uhBUKIPy/Be2aKykDaI6vUQlv6RaRgdIX7GB lNWCCrOm9/zdiH37FOiKuztqhuMAYbo8ByvygxlTjkO+Tly/UcUdiIiuQjx3qcyX7GRK 
Organization: Red Hat UK Ltd. Registered in England and Wales under Company Registration No.
3798903 Subject: [PATCH net-next 03/35] rxrpc: Remove decl for rxrpc_kernel_call_is_complete() From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Wed, 30 Nov 2022 16:54:43 +0000 Message-ID: <166982728350.621383.8149292270575612199.stgit@warthog.procyon.org.uk> In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> References: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.7 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750941115652957517?= X-GMAIL-MSGID: =?utf-8?q?1750941115652957517?= rxrpc_kernel_call_is_complete() has been removed, so remove its declaration too. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- include/net/af_rxrpc.h | 1 - 1 file changed, 1 deletion(-) diff --git a/include/net/af_rxrpc.h b/include/net/af_rxrpc.h index dc033f08191e..d5a5ae926380 100644 --- a/include/net/af_rxrpc.h +++ b/include/net/af_rxrpc.h @@ -66,7 +66,6 @@ int rxrpc_kernel_charge_accept(struct socket *, rxrpc_notify_rx_t, void rxrpc_kernel_set_tx_length(struct socket *, struct rxrpc_call *, s64); bool rxrpc_kernel_check_life(const struct socket *, const struct rxrpc_call *); u32 rxrpc_kernel_get_epoch(struct socket *, struct rxrpc_call *); -bool rxrpc_kernel_call_is_complete(struct rxrpc_call *); void rxrpc_kernel_set_max_life(struct socket *, struct rxrpc_call *, unsigned long); From patchwork Wed Nov 30 16:54:52 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 27880 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp1038508wrr; Wed, 30 Nov 2022 08:59:47 -0800 (PST) X-Google-Smtp-Source: AA0mqf5MrJpROzyWv9lTL6uvU7XW7RNS34n/wXA20WZFzcPCed0F0vxRHogRXUEsFBLQUiexN4zh X-Received: by 2002:a17:906:5052:b0:7a9:6107:572a with SMTP id e18-20020a170906505200b007a96107572amr53168263ejk.729.1669827586963; Wed, 30 Nov 2022 08:59:46 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669827586; cv=none; d=google.com; s=arc-20160816; b=w/VIVYiEmxuFaEdh9PPltrJSGVFPbRJx75ilcEa2XsJqVPjvxYCVAS+Jpp0M7nwi3f FR95QxocGKWv0UsJHgcGBGH9XRhRmDLMTUJ2YxPROlLfAtYLRNmV14gRLhdfZ4cwbaxt BcXbzYGCCqDMkHwTEa/LDfbyA8wmAcqXhOCF8YRbiyiR5lvcn1d0zCfQcEmG9NdSkP1n byiz5QNGT4Lm3LTVFTrS0zKNEtyEytdqu3z8Hin+CWcR05awYBbIDN/1i3jLBwY7DNrf KoXLaxo19CMiTksPFicJ/fSplwDNqKXM4xgepKANc5UsohxXYMh8mpLntXUkvpJfus25 f70w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:cc:to:from :subject:organization:dkim-signature; bh=dnHFfz7r7prpFSiz+JpSCCTL+bVoik9Funx4d+jmueA=; b=xN/8saeFnSfwv7kyte3qStpZubAbB/cK/RgF4b+mcEjp+xeIpsR0NqkIp8auftzhB6 lkMjYih2vgtJ28At1aLGK0rTQj7lWdYpun0BCLRxcjHTvn96LJzS8gK1thq/eE/xs7VQ 
Organization: Red Hat UK Ltd. Registered in England and Wales under Company Registration No.
3798903 Subject: [PATCH net-next 04/35] rxrpc: Remove handling of duplicate packets in recvmsg_queue From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Wed, 30 Nov 2022 16:54:52 +0000 Message-ID: <166982729206.621383.16271295827069466547.stgit@warthog.procyon.org.uk> In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> References: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750941131648428519?= X-GMAIL-MSGID: =?utf-8?q?1750941131648428519?= We should not now see duplicate packets in the recvmsg_queue. At one point, jumbo packets that overlapped with already queued data would be added to the queue and dealt with in recvmsg rather than in the softirq input code, but now jumbo packets are split/cloned before being processed by the input code and the subpackets can be discarded individually. So remove the recvmsg-side code for handling this. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- net/rxrpc/recvmsg.c | 18 ------------------ 1 file changed, 18 deletions(-) diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c index efb85f983657..134122f5961a 100644 --- a/net/rxrpc/recvmsg.c +++ b/net/rxrpc/recvmsg.c @@ -228,7 +228,6 @@ static void rxrpc_rotate_rx_window(struct rxrpc_call *call) _enter("%d", call->debug_id); -further_rotation: skb = skb_dequeue(&call->recvmsg_queue); rxrpc_see_skb(skb, rxrpc_skb_rotated); @@ -250,17 +249,6 @@ static void rxrpc_rotate_rx_window(struct rxrpc_call *call) return; } - /* The next packet on the queue might entirely overlap with the one we - * just consumed; if so, rotate that away also. - */ - skb = skb_peek(&call->recvmsg_queue); - if (skb) { - sp = rxrpc_skb(skb); - if (sp->hdr.seq != call->rx_consumed && - after_eq(call->rx_consumed, sp->hdr.seq)) - goto further_rotation; - } - /* Check to see if there's an ACK that needs sending. */ acked = atomic_add_return(call->rx_consumed - old_consumed, &call->ackr_nr_consumed); @@ -318,11 +306,6 @@ static int rxrpc_recvmsg_data(struct socket *sock, struct rxrpc_call *call, sp = rxrpc_skb(skb); seq = sp->hdr.seq; - if (after_eq(call->rx_consumed, seq)) { - kdebug("obsolete %x %x", call->rx_consumed, seq); - goto skip_obsolete; - } - if (!(flags & MSG_PEEK)) trace_rxrpc_receive(call, rxrpc_receive_front, sp->hdr.serial, seq); @@ -373,7 +356,6 @@ static int rxrpc_recvmsg_data(struct socket *sock, struct rxrpc_call *call, break; } - skip_obsolete: /* The whole packet has been transferred. 
*/ if (sp->hdr.flags & RXRPC_LAST_PACKET) ret = 1; From patchwork Wed Nov 30 16:55:00 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 27890 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp1039876wrr; Wed, 30 Nov 2022 09:01:48 -0800 (PST) X-Google-Smtp-Source: AA0mqf6qWPagtimQE8oDm95xe++6tFpENQYj033FtBRyefREfCzHEZ1CeI4EBSZu06lLihfBmhi+ X-Received: by 2002:a17:902:7fca:b0:189:9310:f626 with SMTP id t10-20020a1709027fca00b001899310f626mr12912502plb.109.1669827708158; Wed, 30 Nov 2022 09:01:48 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669827708; cv=none; d=google.com; s=arc-20160816; b=qDXOkSamxEi27DYYEvbo1yvpwzDSiO+aW7KZ6p3IwvhhJ7biOZgvy6Q8sH8U2cTwDm 1u/Q7C5Bsh+8l7NmoHKLh6OWvP26Uu/svPwIQE1jWgkJ6VS2L3A2TcvkthEl38h+zJTL /qOIPqv/D0PTixs3VyaIMJWlPsdpN0x3dmaWI3I9R1HGS6ae57iRfCVrG+xGdgsqupQp KZ3JwWnc0WkyGmlYhCs8eEmEYA6kZGv8hXAapItUVPwQzofQKy87IulJBMvcmM+1yCQg zZngyqQ9GtAGMdOeIf5PLvOROTBT5ozv5D/5ZocZyp+OPALI9nWMRBkqb8sMkeO+/xta WgLw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:cc:to:from :subject:organization:dkim-signature; bh=ELwEvoqcLtONqcVmTzyiWYOh4/BRyQ//XCTjBm7n/as=; b=SBpWNwhFb+DkdTUG2JGZMHbDLxZwPsMmrGRlLmDRRfOMwVF4YjasVs6PwschOuirFe Gmu2yHnMf3+ap4t+kbtUK41L11cvqaM9HKiCqZxyazg5Yo1kAnxoEA41am+WHIFyBYmF +2yoWNOpFzyv9SefENeFvV2+YEf6nR0/m2H7MgWXbKQ1EG1gcLuD1ILhDIvUn2uU7D5i y/gW7NiA8WrkPzhf4UIfmU7s2Jf2o8xkfz3rzIR5qgbU5b6t2ehnEks+4FnInV+F6mXm ytt7q1oo8qqaTmue25fxsTFHVlALfYAb1kmqdI5Dxv4uoENUe0HMbnPGwPu6+K9JDDTU Kurw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=E1DmALz3; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: from out1.vger.email (out1.vger.email. 
Organization: Red Hat UK Ltd. Registered in England and Wales under Company Registration No.
3798903 Subject: [PATCH net-next 05/35] rxrpc: Remove the [k_]proto() debugging macros From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Wed, 30 Nov 2022 16:55:00 +0000 Message-ID: <166982730073.621383.6529596529186180273.stgit@warthog.procyon.org.uk> In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> References: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750941258779959875?= X-GMAIL-MSGID: =?utf-8?q?1750941258779959875?= Remove the kproto() and _proto() debugging macros in preference to using tracepoints for this. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- include/trace/events/rxrpc.h | 60 ++++++++++++++++++++++++++++++++++++++++++ net/rxrpc/ar-internal.h | 10 ------- net/rxrpc/conn_event.c | 4 --- net/rxrpc/input.c | 17 ------------ net/rxrpc/local_event.c | 3 -- net/rxrpc/output.c | 2 - net/rxrpc/peer_event.c | 4 --- net/rxrpc/rxkad.c | 9 ++---- 8 files changed, 63 insertions(+), 46 deletions(-) diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index b9886d1df825..2b77f9a75bf7 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -733,6 +733,66 @@ TRACE_EVENT(rxrpc_rx_abort, __entry->abort_code) ); +TRACE_EVENT(rxrpc_rx_challenge, + TP_PROTO(struct rxrpc_connection *conn, rxrpc_serial_t serial, + u32 version, u32 nonce, u32 min_level), + + TP_ARGS(conn, serial, version, nonce, min_level), + + TP_STRUCT__entry( + __field(unsigned int, conn ) + __field(rxrpc_serial_t, serial ) + __field(u32, version ) + __field(u32, nonce ) + __field(u32, min_level ) + ), + + TP_fast_assign( + __entry->conn = conn->debug_id; + __entry->serial = serial; + __entry->version = version; + __entry->nonce = nonce; + __entry->min_level = min_level; + ), + + TP_printk("C=%08x CHALLENGE %08x v=%x n=%x ml=%x", + __entry->conn, + __entry->serial, + __entry->version, + __entry->nonce, + __entry->min_level) + ); + +TRACE_EVENT(rxrpc_rx_response, + TP_PROTO(struct rxrpc_connection *conn, rxrpc_serial_t serial, + u32 version, u32 kvno, u32 ticket_len), + + TP_ARGS(conn, serial, version, kvno, ticket_len), + + TP_STRUCT__entry( + __field(unsigned int, conn ) + __field(rxrpc_serial_t, serial ) + __field(u32, version ) + __field(u32, kvno ) + __field(u32, ticket_len ) + ), + + TP_fast_assign( + __entry->conn = conn->debug_id; + __entry->serial = serial; + __entry->version = version; + __entry->kvno = kvno; + __entry->ticket_len = ticket_len; + ), + + TP_printk("C=%08x RESPONSE %08x v=%x kvno=%x tl=%x", + __entry->conn, + __entry->serial, + __entry->version, + __entry->kvno, + __entry->ticket_len) + ); + TRACE_EVENT(rxrpc_rx_rwind_change, TP_PROTO(struct rxrpc_call *call, rxrpc_serial_t serial, u32 rwind, bool wake), diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 
f5c538ce3e23..a3a29390e12b 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -1190,7 +1190,6 @@ extern unsigned int rxrpc_debug; #define kenter(FMT,...) dbgprintk("==> %s("FMT")",__func__ ,##__VA_ARGS__) #define kleave(FMT,...) dbgprintk("<== %s()"FMT"",__func__ ,##__VA_ARGS__) #define kdebug(FMT,...) dbgprintk(" "FMT ,##__VA_ARGS__) -#define kproto(FMT,...) dbgprintk("### "FMT ,##__VA_ARGS__) #define knet(FMT,...) dbgprintk("@@@ "FMT ,##__VA_ARGS__) @@ -1198,14 +1197,12 @@ extern unsigned int rxrpc_debug; #define _enter(FMT,...) kenter(FMT,##__VA_ARGS__) #define _leave(FMT,...) kleave(FMT,##__VA_ARGS__) #define _debug(FMT,...) kdebug(FMT,##__VA_ARGS__) -#define _proto(FMT,...) kproto(FMT,##__VA_ARGS__) #define _net(FMT,...) knet(FMT,##__VA_ARGS__) #elif defined(CONFIG_AF_RXRPC_DEBUG) #define RXRPC_DEBUG_KENTER 0x01 #define RXRPC_DEBUG_KLEAVE 0x02 #define RXRPC_DEBUG_KDEBUG 0x04 -#define RXRPC_DEBUG_KPROTO 0x08 #define RXRPC_DEBUG_KNET 0x10 #define _enter(FMT,...) \ @@ -1226,12 +1223,6 @@ do { \ kdebug(FMT,##__VA_ARGS__); \ } while (0) -#define _proto(FMT,...) \ -do { \ - if (unlikely(rxrpc_debug & RXRPC_DEBUG_KPROTO)) \ - kproto(FMT,##__VA_ARGS__); \ -} while (0) - #define _net(FMT,...) \ do { \ if (unlikely(rxrpc_debug & RXRPC_DEBUG_KNET)) \ @@ -1242,7 +1233,6 @@ do { \ #define _enter(FMT,...) no_printk("==> %s("FMT")",__func__ ,##__VA_ARGS__) #define _leave(FMT,...) no_printk("<== %s()"FMT"",__func__ ,##__VA_ARGS__) #define _debug(FMT,...) no_printk(" "FMT ,##__VA_ARGS__) -#define _proto(FMT,...) no_printk("### "FMT ,##__VA_ARGS__) #define _net(FMT,...) no_printk("@@@ "FMT ,##__VA_ARGS__) #endif diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c index aab069701398..d5549cbfc71b 100644 --- a/net/rxrpc/conn_event.c +++ b/net/rxrpc/conn_event.c @@ -122,14 +122,12 @@ static void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn, switch (chan->last_type) { case RXRPC_PACKET_TYPE_ABORT: - _proto("Tx ABORT %%%u { %d } [re]", serial, conn->abort_code); break; case RXRPC_PACKET_TYPE_ACK: trace_rxrpc_tx_ack(chan->call_debug_id, serial, ntohl(pkt.ack.firstPacket), ntohl(pkt.ack.serial), pkt.ack.reason, 0); - _proto("Tx ACK %%%u [re]", serial); break; } @@ -242,7 +240,6 @@ static int rxrpc_abort_connection(struct rxrpc_connection *conn, serial = atomic_inc_return(&conn->serial); rxrpc_abort_calls(conn, RXRPC_CALL_LOCALLY_ABORTED, serial); whdr.serial = htonl(serial); - _proto("Tx CONN ABORT %%%u { %d }", serial, conn->abort_code); ret = kernel_sendmsg(conn->params.local->socket, &msg, iov, 2, len); if (ret < 0) { @@ -315,7 +312,6 @@ static int rxrpc_process_event(struct rxrpc_connection *conn, return -EPROTO; } abort_code = ntohl(wtmp); - _proto("Rx ABORT %%%u { ac=%d }", sp->hdr.serial, abort_code); conn->error = -ECONNABORTED; conn->abort_code = abort_code; diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index bdf70b81addc..646ee61af40e 100644 --- a/net/rxrpc/input.c +++ b/net/rxrpc/input.c @@ -551,9 +551,6 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb) atomic64_read(&call->ackr_window), call->rx_highest_seq, skb->len, seq0); - _proto("Rx DATA %%%u { #%u f=%02x }", - sp->hdr.serial, seq0, sp->hdr.flags); - state = READ_ONCE(call->state); if (state >= RXRPC_CALL_COMPLETE) { rxrpc_free_skb(skb, rxrpc_skb_freed); @@ -708,11 +705,6 @@ static void rxrpc_input_ackinfo(struct rxrpc_call *call, struct sk_buff *skb, bool wake = false; u32 rwind = ntohl(ackinfo->rwind); - _proto("Rx ACK %%%u Info { rx=%u max=%u rwin=%u jm=%u }", - 
sp->hdr.serial, - ntohl(ackinfo->rxMTU), ntohl(ackinfo->maxMTU), - rwind, ntohl(ackinfo->jumbo_max)); - if (rwind > RXRPC_TX_MAX_WINDOW) rwind = RXRPC_TX_MAX_WINDOW; if (call->tx_winsize != rwind) { @@ -855,7 +847,6 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) } if (ack.reason == RXRPC_ACK_PING) { - _proto("Rx ACK %%%u PING Request", ack_serial); rxrpc_send_ACK(call, RXRPC_ACK_PING_RESPONSE, ack_serial, rxrpc_propose_ack_respond_to_ping); } else if (sp->hdr.flags & RXRPC_REQUEST_ACK) { @@ -1014,9 +1005,6 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) static void rxrpc_input_ackall(struct rxrpc_call *call, struct sk_buff *skb) { struct rxrpc_ack_summary summary = { 0 }; - struct rxrpc_skb_priv *sp = rxrpc_skb(skb); - - _proto("Rx ACKALL %%%u", sp->hdr.serial); spin_lock(&call->input_lock); @@ -1044,8 +1032,6 @@ static void rxrpc_input_abort(struct rxrpc_call *call, struct sk_buff *skb) trace_rxrpc_rx_abort(call, sp->hdr.serial, abort_code); - _proto("Rx ABORT %%%u { %x }", sp->hdr.serial, abort_code); - rxrpc_set_call_completion(call, RXRPC_CALL_REMOTELY_ABORTED, abort_code, -ECONNABORTED); } @@ -1081,8 +1067,6 @@ static void rxrpc_input_call_packet(struct rxrpc_call *call, goto no_free; case RXRPC_PACKET_TYPE_BUSY: - _proto("Rx BUSY %%%u", sp->hdr.serial); - /* Just ignore BUSY packets from the server; the retry and * lifespan timers will take care of business. BUSY packets * from the client don't make sense. @@ -1325,7 +1309,6 @@ int rxrpc_input_packet(struct sock *udp_sk, struct sk_buff *skb) goto discard; default: - _proto("Rx Bad Packet Type %u", sp->hdr.type); goto bad_message; } diff --git a/net/rxrpc/local_event.c b/net/rxrpc/local_event.c index 19e929c7c38b..f23a3fbabbda 100644 --- a/net/rxrpc/local_event.c +++ b/net/rxrpc/local_event.c @@ -63,8 +63,6 @@ static void rxrpc_send_version_request(struct rxrpc_local *local, len = iov[0].iov_len + iov[1].iov_len; - _proto("Tx VERSION (reply)"); - ret = kernel_sendmsg(local->socket, &msg, iov, 2, len); if (ret < 0) trace_rxrpc_tx_fail(local->debug_id, 0, ret, @@ -98,7 +96,6 @@ void rxrpc_process_local_events(struct rxrpc_local *local) if (skb_copy_bits(skb, sizeof(struct rxrpc_wire_header), &v, 1) < 0) return; - _proto("Rx VERSION { %02x }", v); if (v == 0) rxrpc_send_version_request(local, &sp->hdr, skb); break; diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index c5eed0e83e47..635acf3dbd77 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -701,8 +701,6 @@ void rxrpc_send_keepalive(struct rxrpc_peer *peer) len = iov[0].iov_len + iov[1].iov_len; - _proto("Tx VERSION (keepalive)"); - iov_iter_kvec(&msg.msg_iter, WRITE, iov, 2, len); ret = do_udp_sendmsg(peer->local->socket, &msg, len); if (ret < 0) diff --git a/net/rxrpc/peer_event.c b/net/rxrpc/peer_event.c index cda3890657a9..be781c156e89 100644 --- a/net/rxrpc/peer_event.c +++ b/net/rxrpc/peer_event.c @@ -253,15 +253,12 @@ static void rxrpc_store_error(struct rxrpc_peer *peer, break; default: - _proto("Rx Received ICMP error { type=%u code=%u }", - ee->ee_type, ee->ee_code); break; } break; case SO_EE_ORIGIN_NONE: case SO_EE_ORIGIN_LOCAL: - _proto("Rx Received local error { error=%d }", err); compl = RXRPC_CALL_LOCAL_ERROR; break; @@ -270,7 +267,6 @@ static void rxrpc_store_error(struct rxrpc_peer *peer, err = EHOSTUNREACH; fallthrough; default: - _proto("Rx Received error report { orig=%u }", ee->ee_origin); break; } diff --git a/net/rxrpc/rxkad.c b/net/rxrpc/rxkad.c index 110a5550c0a6..36cf40442a7e 100644 
--- a/net/rxrpc/rxkad.c +++ b/net/rxrpc/rxkad.c @@ -704,7 +704,6 @@ static int rxkad_issue_challenge(struct rxrpc_connection *conn) serial = atomic_inc_return(&conn->serial); whdr.serial = htonl(serial); - _proto("Tx CHALLENGE %%%u", serial); ret = kernel_sendmsg(conn->params.local->socket, &msg, iov, 2, len); if (ret < 0) { @@ -762,7 +761,6 @@ static int rxkad_send_response(struct rxrpc_connection *conn, serial = atomic_inc_return(&conn->serial); whdr.serial = htonl(serial); - _proto("Tx RESPONSE %%%u", serial); ret = kernel_sendmsg(conn->params.local->socket, &msg, iov, 3, len); if (ret < 0) { @@ -856,8 +854,7 @@ static int rxkad_respond_to_challenge(struct rxrpc_connection *conn, nonce = ntohl(challenge.nonce); min_level = ntohl(challenge.min_level); - _proto("Rx CHALLENGE %%%u { v=%u n=%u ml=%u }", - sp->hdr.serial, version, nonce, min_level); + trace_rxrpc_rx_challenge(conn, sp->hdr.serial, version, nonce, min_level); eproto = tracepoint_string("chall_ver"); abort_code = RXKADINCONSISTENCY; @@ -1139,8 +1136,8 @@ static int rxkad_verify_response(struct rxrpc_connection *conn, version = ntohl(response->version); ticket_len = ntohl(response->ticket_len); kvno = ntohl(response->kvno); - _proto("Rx RESPONSE %%%u { v=%u kv=%u tl=%u }", - sp->hdr.serial, version, kvno, ticket_len); + + trace_rxrpc_rx_response(conn, sp->hdr.serial, version, kvno, ticket_len); eproto = tracepoint_string("rxkad_rsp_ver"); abort_code = RXKADINCONSISTENCY; From patchwork Wed Nov 30 16:55:09 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 27879 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp1038471wrr; Wed, 30 Nov 2022 08:59:44 -0800 (PST) X-Google-Smtp-Source: AA0mqf5Xm6VCBFu5Uidywhi0uW/bop0x2tPybUSIx1fl/X2nNGeo0+3SW2OiyNYeWowe0ffhRszY X-Received: by 2002:a05:6a00:1c8f:b0:574:6880:e76f with SMTP id y15-20020a056a001c8f00b005746880e76fmr34779074pfw.35.1669827583662; Wed, 30 Nov 2022 08:59:43 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669827583; cv=none; d=google.com; s=arc-20160816; b=R3cqB50I1g6YNVF8p3orz4s6m/Qn9kq+buwXulOHvQLsGwMcRAQwN/iTZ0/5O8tq/I nUpp6zxKW3Fh8bsMW/rgMZ/y4o3YUJXXTq95fk75AiWLDFtBRyeju6WNqaCoqsr2tDU0 G/Ytl594gBxoD99p/yOPPHPhfd921m2Yqm2YhizRsPZnXkfGSfnouqVClw+71m73jP+8 Z3FdJINA3t7iWLf9toM4zsVSKFZoJuDX6FKWrIthpjgtPpjDmtBLQy823x6T5TjbMWEO 9Qn5Jil35hz9yFeuWRk/1lManiyxQmRIkvbVgD8pqzsDT9IWWs/XTXWSKB3I+i22y4cz h/UA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:cc:to:from :subject:organization:dkim-signature; bh=TfU2vOhI2FJqWWTwtY56Qr3tqmx7ENd1hy5S4qU6G68=; b=H1rYLbcbNKf/Q43xOMNBili/l0Ws/NqnGd872MieI57mvx0eT8+VkKRnKI4GjAF/R1 bb1PM2RqBhGNZm6YisBHRmRLpyT0bWbrKQzY9ZMfXCGRk3kQ9WCjOUh64AIauvkXXKQj Nga+wxAsdho2Tqb+9uqF4h2HlQ3pbBMpNzAn1lcdBdRelgR+iobTEj9Z2SVYEB4z9gnP lqbeEgnZ6tH/3GB2vIvqfzi3Kpo8rGXloRkPcERgSO06zc/0v4TThQg1UTR0cLOLm//T I/bycWtqW3MzG11uEt0GX1/GByheYp2BEwg6qxwdzgIHlWD8nvJrZ7ulkWegG8/LLuq6 4Dhg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=II2zeNes; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: from out1.vger.email 
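As a rough illustration of the conversion in the patch above (a sketch only: the wrapper function below is made up, while trace_rxrpc_rx_challenge() and its arguments are the ones added to include/trace/events/rxrpc.h by this patch), a former _proto() call site now reduces to a single tracepoint call that can be switched on at runtime via tracefs instead of relying on the __KDEBUG / rxrpc_debug mechanisms:

static void sketch_report_challenge(struct rxrpc_connection *conn,
				    rxrpc_serial_t serial,
				    u32 version, u32 nonce, u32 min_level)
{
	/* Was: _proto("Rx CHALLENGE %%%u { v=%u n=%u ml=%u }",
	 *	       serial, version, nonce, min_level);
	 */
	trace_rxrpc_rx_challenge(conn, serial, version, nonce, min_level);
}

Once built in, the event sits alongside the other rxrpc tracepoints in the rxrpc group, typically reachable at /sys/kernel/tracing/events/rxrpc/rxrpc_rx_challenge/enable.
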
3798903 Subject: [PATCH net-next 06/35] rxrpc: Remove the [_k]net() debugging macros From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Wed, 30 Nov 2022 16:55:09 +0000 Message-ID: <166982730951.621383.821090767945998432.stgit@warthog.procyon.org.uk> In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> References: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.8 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750941128639504493?= X-GMAIL-MSGID: =?utf-8?q?1750941128639504493?= Remove the _net() and knet() debugging macros in favour of tracepoints. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- net/rxrpc/ar-internal.h | 10 ---------- net/rxrpc/call_object.c | 6 ------ net/rxrpc/conn_client.c | 2 -- net/rxrpc/conn_object.c | 2 -- net/rxrpc/conn_service.c | 2 -- net/rxrpc/input.c | 1 - net/rxrpc/local_object.c | 8 -------- net/rxrpc/peer_event.c | 48 ++-------------------------------------------- net/rxrpc/peer_object.c | 6 +----- 9 files changed, 3 insertions(+), 82 deletions(-) diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index a3a29390e12b..36ece1efb1d4 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -1190,20 +1190,17 @@ extern unsigned int rxrpc_debug; #define kenter(FMT,...) dbgprintk("==> %s("FMT")",__func__ ,##__VA_ARGS__) #define kleave(FMT,...) dbgprintk("<== %s()"FMT"",__func__ ,##__VA_ARGS__) #define kdebug(FMT,...) dbgprintk(" "FMT ,##__VA_ARGS__) -#define knet(FMT,...) dbgprintk("@@@ "FMT ,##__VA_ARGS__) #if defined(__KDEBUG) #define _enter(FMT,...) kenter(FMT,##__VA_ARGS__) #define _leave(FMT,...) kleave(FMT,##__VA_ARGS__) #define _debug(FMT,...) kdebug(FMT,##__VA_ARGS__) -#define _net(FMT,...) knet(FMT,##__VA_ARGS__) #elif defined(CONFIG_AF_RXRPC_DEBUG) #define RXRPC_DEBUG_KENTER 0x01 #define RXRPC_DEBUG_KLEAVE 0x02 #define RXRPC_DEBUG_KDEBUG 0x04 -#define RXRPC_DEBUG_KNET 0x10 #define _enter(FMT,...) \ do { \ @@ -1223,17 +1220,10 @@ do { \ kdebug(FMT,##__VA_ARGS__); \ } while (0) -#define _net(FMT,...) \ -do { \ - if (unlikely(rxrpc_debug & RXRPC_DEBUG_KNET)) \ - knet(FMT,##__VA_ARGS__); \ -} while (0) - #else #define _enter(FMT,...) no_printk("==> %s("FMT")",__func__ ,##__VA_ARGS__) #define _leave(FMT,...) no_printk("<== %s()"FMT"",__func__ ,##__VA_ARGS__) #define _debug(FMT,...) no_printk(" "FMT ,##__VA_ARGS__) -#define _net(FMT,...) 
no_printk("@@@ "FMT ,##__VA_ARGS__) #endif /* diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c index 1befe22cd301..e36a317b2e9a 100644 --- a/net/rxrpc/call_object.c +++ b/net/rxrpc/call_object.c @@ -349,8 +349,6 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx, rxrpc_start_call_timer(call); - _net("CALL new %d on CONN %d", call->debug_id, call->conn->debug_id); - _leave(" = %p [new]", call); return call; @@ -423,8 +421,6 @@ void rxrpc_incoming_call(struct rxrpc_sock *rx, hlist_add_head_rcu(&call->error_link, &conn->params.peer->error_targets); spin_unlock(&conn->params.peer->lock); - _net("CALL incoming %d on CONN %d", call->debug_id, call->conn->debug_id); - rxrpc_start_call_timer(call); _leave(""); } @@ -669,8 +665,6 @@ void rxrpc_cleanup_call(struct rxrpc_call *call) { struct rxrpc_txbuf *txb; - _net("DESTROY CALL %d", call->debug_id); - memset(&call->sock_node, 0xcd, sizeof(call->sock_node)); ASSERTCMP(call->state, ==, RXRPC_CALL_COMPLETE); diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c index f11c97e28d2a..2b76fbffd4dd 100644 --- a/net/rxrpc/conn_client.c +++ b/net/rxrpc/conn_client.c @@ -541,8 +541,6 @@ static void rxrpc_activate_one_channel(struct rxrpc_connection *conn, call->service_id = conn->service_id; trace_rxrpc_connect_call(call); - _net("CONNECT call %08x:%08x as call %d on conn %d", - call->cid, call->call_id, call->debug_id, conn->debug_id); write_lock_bh(&call->state_lock); call->state = RXRPC_CALL_CLIENT_SEND_REQUEST; diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c index 156bd26daf74..d5d15389406f 100644 --- a/net/rxrpc/conn_object.c +++ b/net/rxrpc/conn_object.c @@ -356,8 +356,6 @@ static void rxrpc_destroy_connection(struct rcu_head *rcu) ASSERTCMP(refcount_read(&conn->ref), ==, 0); - _net("DESTROY CONN %d", conn->debug_id); - del_timer_sync(&conn->timer); rxrpc_purge_queue(&conn->rx_queue); diff --git a/net/rxrpc/conn_service.c b/net/rxrpc/conn_service.c index 6e6aa02c6f9e..75f903099eb0 100644 --- a/net/rxrpc/conn_service.c +++ b/net/rxrpc/conn_service.c @@ -184,8 +184,6 @@ void rxrpc_new_incoming_connection(struct rxrpc_sock *rx, /* Make the connection a target for incoming packets. 
*/ rxrpc_publish_service_conn(conn->params.peer, conn); - - _net("CONNECTION new %d {%x}", conn->debug_id, conn->proto.cid); } /* diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index 646ee61af40e..e2461f29d765 100644 --- a/net/rxrpc/input.c +++ b/net/rxrpc/input.c @@ -725,7 +725,6 @@ static void rxrpc_input_ackinfo(struct rxrpc_call *call, struct sk_buff *skb, peer->maxdata = mtu; peer->mtu = mtu + peer->hdrsize; spin_unlock_bh(&peer->lock); - _net("Net MTU %u (maxdata %u)", peer->mtu, peer->maxdata); } if (wake) diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c index a943fdf91e24..11080c335d42 100644 --- a/net/rxrpc/local_object.c +++ b/net/rxrpc/local_object.c @@ -198,7 +198,6 @@ struct rxrpc_local *rxrpc_lookup_local(struct net *net, struct rxrpc_local *local; struct rxrpc_net *rxnet = rxrpc_net(net); struct hlist_node *cursor; - const char *age; long diff; int ret; @@ -232,7 +231,6 @@ struct rxrpc_local *rxrpc_lookup_local(struct net *net, if (!rxrpc_use_local(local)) break; - age = "old"; goto found; } @@ -250,14 +248,9 @@ struct rxrpc_local *rxrpc_lookup_local(struct net *net, } else { hlist_add_head_rcu(&local->link, &rxnet->local_endpoints); } - age = "new"; found: mutex_unlock(&rxnet->local_mutex); - - _net("LOCAL %s %d {%pISp}", - age, local->debug_id, &local->srx.transport); - _leave(" = %p", local); return local; @@ -467,7 +460,6 @@ static void rxrpc_local_rcu(struct rcu_head *rcu) ASSERT(!work_pending(&local->processor)); - _net("DESTROY LOCAL %d", local->debug_id); kfree(local); _leave(""); } diff --git a/net/rxrpc/peer_event.c b/net/rxrpc/peer_event.c index be781c156e89..ad4d1769e02b 100644 --- a/net/rxrpc/peer_event.c +++ b/net/rxrpc/peer_event.c @@ -48,13 +48,11 @@ static struct rxrpc_peer *rxrpc_lookup_peer_local_rcu(struct rxrpc_local *local, srx->transport.sin.sin_port = serr->port; switch (serr->ee.ee_origin) { case SO_EE_ORIGIN_ICMP: - _net("Rx ICMP"); memcpy(&srx->transport.sin.sin_addr, skb_network_header(skb) + serr->addr_offset, sizeof(struct in_addr)); break; case SO_EE_ORIGIN_ICMP6: - _net("Rx ICMP6 on v4 sock"); memcpy(&srx->transport.sin.sin_addr, skb_network_header(skb) + serr->addr_offset + 12, sizeof(struct in_addr)); @@ -70,14 +68,12 @@ static struct rxrpc_peer *rxrpc_lookup_peer_local_rcu(struct rxrpc_local *local, case AF_INET6: switch (serr->ee.ee_origin) { case SO_EE_ORIGIN_ICMP6: - _net("Rx ICMP6"); srx->transport.sin6.sin6_port = serr->port; memcpy(&srx->transport.sin6.sin6_addr, skb_network_header(skb) + serr->addr_offset, sizeof(struct in6_addr)); break; case SO_EE_ORIGIN_ICMP: - _net("Rx ICMP on v6 sock"); srx->transport_len = sizeof(srx->transport.sin); srx->transport.family = AF_INET; srx->transport.sin.sin_port = serr->port; @@ -106,13 +102,9 @@ static struct rxrpc_peer *rxrpc_lookup_peer_local_rcu(struct rxrpc_local *local, */ static void rxrpc_adjust_mtu(struct rxrpc_peer *peer, unsigned int mtu) { - _net("Rx ICMP Fragmentation Needed (%d)", mtu); - /* wind down the local interface MTU */ - if (mtu > 0 && peer->if_mtu == 65535 && mtu < peer->if_mtu) { + if (mtu > 0 && peer->if_mtu == 65535 && mtu < peer->if_mtu) peer->if_mtu = mtu; - _net("I/F MTU %u", mtu); - } if (mtu == 0) { /* they didn't give us a size, estimate one */ @@ -133,8 +125,6 @@ static void rxrpc_adjust_mtu(struct rxrpc_peer *peer, unsigned int mtu) peer->mtu = mtu; peer->maxdata = peer->mtu - peer->hdrsize; spin_unlock_bh(&peer->lock); - _net("Net MTU %u (maxdata %u)", - peer->mtu, peer->maxdata); } } @@ -222,41 +212,6 @@ static void rxrpc_store_error(struct 
rxrpc_peer *peer, err = ee->ee_errno; switch (ee->ee_origin) { - case SO_EE_ORIGIN_ICMP: - switch (ee->ee_type) { - case ICMP_DEST_UNREACH: - switch (ee->ee_code) { - case ICMP_NET_UNREACH: - _net("Rx Received ICMP Network Unreachable"); - break; - case ICMP_HOST_UNREACH: - _net("Rx Received ICMP Host Unreachable"); - break; - case ICMP_PORT_UNREACH: - _net("Rx Received ICMP Port Unreachable"); - break; - case ICMP_NET_UNKNOWN: - _net("Rx Received ICMP Unknown Network"); - break; - case ICMP_HOST_UNKNOWN: - _net("Rx Received ICMP Unknown Host"); - break; - default: - _net("Rx Received ICMP DestUnreach code=%u", - ee->ee_code); - break; - } - break; - - case ICMP_TIME_EXCEEDED: - _net("Rx Received ICMP TTL Exceeded"); - break; - - default: - break; - } - break; - case SO_EE_ORIGIN_NONE: case SO_EE_ORIGIN_LOCAL: compl = RXRPC_CALL_LOCAL_ERROR; @@ -266,6 +221,7 @@ static void rxrpc_store_error(struct rxrpc_peer *peer, if (err == EACCES) err = EHOSTUNREACH; fallthrough; + case SO_EE_ORIGIN_ICMP: default: break; } diff --git a/net/rxrpc/peer_object.c b/net/rxrpc/peer_object.c index 041a51225c5f..b3c3c1c344fc 100644 --- a/net/rxrpc/peer_object.c +++ b/net/rxrpc/peer_object.c @@ -138,10 +138,8 @@ struct rxrpc_peer *rxrpc_lookup_peer_rcu(struct rxrpc_local *local, unsigned long hash_key = rxrpc_peer_hash_key(local, srx); peer = __rxrpc_lookup_peer_rcu(local, srx, hash_key); - if (peer) { - _net("PEER %d {%pISp}", peer->debug_id, &peer->srx.transport); + if (peer) _leave(" = %p {u=%d}", peer, refcount_read(&peer->ref)); - } return peer; } @@ -371,8 +369,6 @@ struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_sock *rx, peer = candidate; } - _net("PEER %d {%pISp}", peer->debug_id, &peer->srx.transport); - _leave(" = %p {u=%d}", peer, refcount_read(&peer->ref)); return peer; } From patchwork Wed Nov 30 16:55:18 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 27881 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp1038527wrr; Wed, 30 Nov 2022 08:59:49 -0800 (PST) X-Google-Smtp-Source: AA0mqf6TxQ7GHJmKiAb7G7+82m7LNre0C2lj/maZ3s6Z0LUcWnJkV4a1VnzKCFE2gXJtPy4A+pUn X-Received: by 2002:a63:5c02:0:b0:476:898c:ded5 with SMTP id q2-20020a635c02000000b00476898cded5mr38722215pgb.299.1669827589242; Wed, 30 Nov 2022 08:59:49 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669827589; cv=none; d=google.com; s=arc-20160816; b=vXVY8IU9c9SsMopMNdpW+/kTLMavFf5lG1kFkAUGwNbzqP7q0w3LrTo+ieas+8ehe1 Bcxapy2+Dyhda4J5yYdskzCP1UcXAp1Zldpf+AC0BfNcsRmLKcppbn/Qt/MraaEEMnVp Oa7FM3LYErTuP1a6QQpzGeRdZvUYLMzVPOzsjBdUYf8jw89pF4xFTPCyBR+vrt8lDIT8 QtRxvxbxugCg//MqqX7nexEsLKQVe+ArJSJYPhHKFvZNfkqgAUoQw2PKZoxpabhZ7Q71 4BqDd6YNuTaIbdWqLQLpOAAOEF9F6F/jGRr3v/0KEOnvleilh/h60kqkBZBuAfKf1HAq cjWw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:cc:to:from :subject:organization:dkim-signature; bh=eG6QqN39A1wxyi1zZYg6LY3f0auotnE9gFgF71Yj0o0=; b=psuQyYgu05obkkY4EsjSyaPvNKlrPtgXDllBSWjuNhWxmawHfTN3bBRy1yvOmevA3x VJSIS1A0omwt8w2tXkusE2zOYZY6Y93OeEhGr1KZjtAgGI9yA7k7/e+S82DxM3RMfN1B ocAODem3dzYqkA1xho/wQrsPe1Co1WVkQ34vm39h3KWqDh6nXDl19L/mTrd7tUl5UXMl qjKGAB8yM3S0zACQRxF5oIyO58pQX+S9psuielMTgY+q+0d4m8paYQuuCRC58o/qv89K JpGyrxaIronK3kbtB//UbyIXxGJ1NvqS4oP2amj8ylW8atMsRp79h60jTTGpMpD35d83 PpSg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass 
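Taken together with the previous patch, this leaves only the enter/leave/debug variants in the ar-internal.h debug block. An abridged sketch of the resulting shape (not a verbatim copy; the CONFIG_AF_RXRPC_DEBUG branch keeps the same rxrpc_debug bitmask pattern shown in the diff and is omitted here):

#define kenter(FMT,...) dbgprintk("==> %s("FMT")",__func__ ,##__VA_ARGS__)
#define kleave(FMT,...) dbgprintk("<== %s()"FMT"",__func__ ,##__VA_ARGS__)
#define kdebug(FMT,...) dbgprintk("    "FMT ,##__VA_ARGS__)

#if defined(__KDEBUG)
#define _enter(FMT,...)	kenter(FMT,##__VA_ARGS__)
#define _leave(FMT,...)	kleave(FMT,##__VA_ARGS__)
#define _debug(FMT,...)	kdebug(FMT,##__VA_ARGS__)
#else
#define _enter(FMT,...)	no_printk("==> %s("FMT")",__func__ ,##__VA_ARGS__)
#define _leave(FMT,...)	no_printk("<== %s()"FMT"",__func__ ,##__VA_ARGS__)
#define _debug(FMT,...)	no_printk("    "FMT ,##__VA_ARGS__)
#endif
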
3798903 Subject: [PATCH net-next 07/35] rxrpc: Drop rxrpc_conn_parameters from rxrpc_connection and rxrpc_bundle From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Wed, 30 Nov 2022 16:55:18 +0000 Message-ID: <166982731818.621383.14821411053362690406.stgit@warthog.procyon.org.uk> In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> References: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.6 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750941134225723894?= X-GMAIL-MSGID: =?utf-8?q?1750941134225723894?= Remove the rxrpc_conn_parameters struct from the rxrpc_connection and rxrpc_bundle structs and emplace the members directly. These are going to get filled in from the rxrpc_call struct in future. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- net/rxrpc/ar-internal.h | 16 ++++++++++++-- net/rxrpc/call_accept.c | 6 +++-- net/rxrpc/call_event.c | 2 +- net/rxrpc/call_object.c | 6 +++-- net/rxrpc/conn_client.c | 53 +++++++++++++++++++++++++++------------------ net/rxrpc/conn_event.c | 26 +++++++++++----------- net/rxrpc/conn_object.c | 22 +++++++++---------- net/rxrpc/conn_service.c | 6 +++-- net/rxrpc/input.c | 4 ++- net/rxrpc/key.c | 2 +- net/rxrpc/output.c | 32 ++++++++++++++------------- net/rxrpc/proc.c | 6 +++-- net/rxrpc/rxkad.c | 54 +++++++++++++++++++++++----------------------- net/rxrpc/security.c | 4 ++- 14 files changed, 131 insertions(+), 108 deletions(-) diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 36ece1efb1d4..7c48b0163032 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -403,12 +403,18 @@ enum rxrpc_conn_proto_state { * RxRPC client connection bundle. 
*/ struct rxrpc_bundle { - struct rxrpc_conn_parameters params; + struct rxrpc_local *local; /* Representation of local endpoint */ + struct rxrpc_peer *peer; /* Remote endpoint */ + struct key *key; /* Security details */ refcount_t ref; atomic_t active; /* Number of active users */ unsigned int debug_id; + u32 security_level; /* Security level selected */ + u16 service_id; /* Service ID for this connection */ bool try_upgrade; /* True if the bundle is attempting upgrade */ bool alloc_conn; /* True if someone's getting a conn */ + bool exclusive; /* T if conn is exclusive */ + bool upgrade; /* T if service ID can be upgraded */ short alloc_error; /* Error from last conn allocation */ spinlock_t channel_lock; struct rb_node local_node; /* Node in local->client_conns */ @@ -424,7 +430,9 @@ struct rxrpc_bundle { */ struct rxrpc_connection { struct rxrpc_conn_proto proto; - struct rxrpc_conn_parameters params; + struct rxrpc_local *local; /* Representation of local endpoint */ + struct rxrpc_peer *peer; /* Remote endpoint */ + struct key *key; /* Security details */ refcount_t ref; struct rcu_head rcu; @@ -471,9 +479,13 @@ struct rxrpc_connection { atomic_t serial; /* packet serial number counter */ unsigned int hi_serial; /* highest serial number received */ u32 service_id; /* Service ID, possibly upgraded */ + u32 security_level; /* Security level selected */ u8 security_ix; /* security type */ u8 out_clientflag; /* RXRPC_CLIENT_INITIATED if we are client */ u8 bundle_shift; /* Index into bundle->avail_chans */ + bool exclusive; /* T if conn is exclusive */ + bool upgrade; /* T if service ID can be upgraded */ + u16 orig_service_id; /* Originally requested service ID */ short error; /* Local error code */ }; diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c index 48790ee77019..4888959e4727 100644 --- a/net/rxrpc/call_accept.c +++ b/net/rxrpc/call_accept.c @@ -305,8 +305,8 @@ static struct rxrpc_call *rxrpc_alloc_incoming_call(struct rxrpc_sock *rx, b->conn_backlog[conn_tail] = NULL; smp_store_release(&b->conn_backlog_tail, (conn_tail + 1) & (RXRPC_BACKLOG_MAX - 1)); - conn->params.local = rxrpc_get_local(local); - conn->params.peer = peer; + conn->local = rxrpc_get_local(local); + conn->peer = peer; rxrpc_see_connection(conn); rxrpc_new_incoming_connection(rx, conn, sec, skb); } else { @@ -323,7 +323,7 @@ static struct rxrpc_call *rxrpc_alloc_incoming_call(struct rxrpc_sock *rx, call->conn = conn; call->security = conn->security; call->security_ix = conn->security_ix; - call->peer = rxrpc_get_peer(conn->params.peer); + call->peer = rxrpc_get_peer(conn->peer); call->cong_ssthresh = call->peer->cong_ssthresh; call->tx_last_sent = ktime_get_real(); return call; diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c index 349f3df569ba..b17ed37434bd 100644 --- a/net/rxrpc/call_event.c +++ b/net/rxrpc/call_event.c @@ -69,7 +69,7 @@ void rxrpc_propose_delay_ACK(struct rxrpc_call *call, rxrpc_serial_t serial, void rxrpc_send_ACK(struct rxrpc_call *call, u8 ack_reason, rxrpc_serial_t serial, enum rxrpc_propose_ack_trace why) { - struct rxrpc_local *local = call->conn->params.local; + struct rxrpc_local *local = call->conn->local; struct rxrpc_txbuf *txb; if (test_bit(RXRPC_CALL_DISCONNECTED, &call->flags)) diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c index e36a317b2e9a..59928f0a8fe1 100644 --- a/net/rxrpc/call_object.c +++ b/net/rxrpc/call_object.c @@ -417,9 +417,9 @@ void rxrpc_incoming_call(struct rxrpc_sock *rx, conn->channels[chan].call_id = call->call_id; 
rcu_assign_pointer(conn->channels[chan].call, call); - spin_lock(&conn->params.peer->lock); - hlist_add_head_rcu(&call->error_link, &conn->params.peer->error_targets); - spin_unlock(&conn->params.peer->lock); + spin_lock(&conn->peer->lock); + hlist_add_head_rcu(&call->error_link, &conn->peer->error_targets); + spin_unlock(&conn->peer->lock); rxrpc_start_call_timer(call); _leave(""); diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c index 2b76fbffd4dd..71404b33623f 100644 --- a/net/rxrpc/conn_client.c +++ b/net/rxrpc/conn_client.c @@ -51,7 +51,7 @@ static void rxrpc_deactivate_bundle(struct rxrpc_bundle *bundle); static int rxrpc_get_client_connection_id(struct rxrpc_connection *conn, gfp_t gfp) { - struct rxrpc_net *rxnet = conn->params.local->rxnet; + struct rxrpc_net *rxnet = conn->local->rxnet; int id; _enter(""); @@ -122,8 +122,13 @@ static struct rxrpc_bundle *rxrpc_alloc_bundle(struct rxrpc_conn_parameters *cp, bundle = kzalloc(sizeof(*bundle), gfp); if (bundle) { - bundle->params = *cp; - rxrpc_get_peer(bundle->params.peer); + bundle->local = cp->local; + bundle->peer = rxrpc_get_peer(cp->peer); + bundle->key = cp->key; + bundle->exclusive = cp->exclusive; + bundle->upgrade = cp->upgrade; + bundle->service_id = cp->service_id; + bundle->security_level = cp->security_level; refcount_set(&bundle->ref, 1); atomic_set(&bundle->active, 1); spin_lock_init(&bundle->channel_lock); @@ -140,7 +145,7 @@ struct rxrpc_bundle *rxrpc_get_bundle(struct rxrpc_bundle *bundle) static void rxrpc_free_bundle(struct rxrpc_bundle *bundle) { - rxrpc_put_peer(bundle->params.peer); + rxrpc_put_peer(bundle->peer); kfree(bundle); } @@ -164,7 +169,7 @@ static struct rxrpc_connection * rxrpc_alloc_client_connection(struct rxrpc_bundle *bundle, gfp_t gfp) { struct rxrpc_connection *conn; - struct rxrpc_net *rxnet = bundle->params.local->rxnet; + struct rxrpc_net *rxnet = bundle->local->rxnet; int ret; _enter(""); @@ -177,10 +182,16 @@ rxrpc_alloc_client_connection(struct rxrpc_bundle *bundle, gfp_t gfp) refcount_set(&conn->ref, 1); conn->bundle = bundle; - conn->params = bundle->params; + conn->local = bundle->local; + conn->peer = bundle->peer; + conn->key = bundle->key; + conn->exclusive = bundle->exclusive; + conn->upgrade = bundle->upgrade; + conn->orig_service_id = bundle->service_id; + conn->security_level = bundle->security_level; conn->out_clientflag = RXRPC_CLIENT_INITIATED; conn->state = RXRPC_CONN_CLIENT; - conn->service_id = conn->params.service_id; + conn->service_id = conn->orig_service_id; ret = rxrpc_get_client_connection_id(conn, gfp); if (ret < 0) @@ -196,9 +207,9 @@ rxrpc_alloc_client_connection(struct rxrpc_bundle *bundle, gfp_t gfp) write_unlock(&rxnet->conn_lock); rxrpc_get_bundle(bundle); - rxrpc_get_peer(conn->params.peer); - rxrpc_get_local(conn->params.local); - key_get(conn->params.key); + rxrpc_get_peer(conn->peer); + rxrpc_get_local(conn->local); + key_get(conn->key); trace_rxrpc_conn(conn->debug_id, rxrpc_conn_new_client, refcount_read(&conn->ref), @@ -228,7 +239,7 @@ static bool rxrpc_may_reuse_conn(struct rxrpc_connection *conn) if (!conn) goto dont_reuse; - rxnet = conn->params.local->rxnet; + rxnet = conn->local->rxnet; if (test_bit(RXRPC_CONN_DONT_REUSE, &conn->flags)) goto dont_reuse; @@ -285,7 +296,7 @@ static struct rxrpc_bundle *rxrpc_look_up_bundle(struct rxrpc_conn_parameters *c while (p) { bundle = rb_entry(p, struct rxrpc_bundle, local_node); -#define cmp(X) ((long)bundle->params.X - (long)cp->X) +#define cmp(X) ((long)bundle->X - (long)cp->X) diff = 
(cmp(peer) ?: cmp(key) ?: cmp(security_level) ?: @@ -314,7 +325,7 @@ static struct rxrpc_bundle *rxrpc_look_up_bundle(struct rxrpc_conn_parameters *c parent = *pp; bundle = rb_entry(parent, struct rxrpc_bundle, local_node); -#define cmp(X) ((long)bundle->params.X - (long)cp->X) +#define cmp(X) ((long)bundle->X - (long)cp->X) diff = (cmp(peer) ?: cmp(key) ?: cmp(security_level) ?: @@ -532,7 +543,7 @@ static void rxrpc_activate_one_channel(struct rxrpc_connection *conn, rxrpc_see_call(call); list_del_init(&call->chan_wait_link); - call->peer = rxrpc_get_peer(conn->params.peer); + call->peer = rxrpc_get_peer(conn->peer); call->conn = rxrpc_get_connection(conn); call->cid = conn->proto.cid | channel; call->call_id = call_id; @@ -569,7 +580,7 @@ static void rxrpc_activate_one_channel(struct rxrpc_connection *conn, */ static void rxrpc_unidle_conn(struct rxrpc_bundle *bundle, struct rxrpc_connection *conn) { - struct rxrpc_net *rxnet = bundle->params.local->rxnet; + struct rxrpc_net *rxnet = bundle->local->rxnet; bool drop_ref; if (!list_empty(&conn->cache_link)) { @@ -795,7 +806,7 @@ void rxrpc_disconnect_client_call(struct rxrpc_bundle *bundle, struct rxrpc_call { struct rxrpc_connection *conn; struct rxrpc_channel *chan = NULL; - struct rxrpc_net *rxnet = bundle->params.local->rxnet; + struct rxrpc_net *rxnet = bundle->local->rxnet; unsigned int channel; bool may_reuse; u32 cid; @@ -936,11 +947,11 @@ static void rxrpc_unbundle_conn(struct rxrpc_connection *conn) */ static void rxrpc_deactivate_bundle(struct rxrpc_bundle *bundle) { - struct rxrpc_local *local = bundle->params.local; + struct rxrpc_local *local = bundle->local; bool need_put = false; if (atomic_dec_and_lock(&bundle->active, &local->client_bundles_lock)) { - if (!bundle->params.exclusive) { + if (!bundle->exclusive) { _debug("erase bundle"); rb_erase(&bundle->local_node, &local->client_bundles); need_put = true; @@ -957,7 +968,7 @@ static void rxrpc_deactivate_bundle(struct rxrpc_bundle *bundle) */ static void rxrpc_kill_client_conn(struct rxrpc_connection *conn) { - struct rxrpc_local *local = conn->params.local; + struct rxrpc_local *local = conn->local; struct rxrpc_net *rxnet = local->rxnet; _enter("C=%x", conn->debug_id); @@ -1036,7 +1047,7 @@ void rxrpc_discard_expired_client_conns(struct work_struct *work) expiry = rxrpc_conn_idle_client_expiry; if (nr_conns > rxrpc_reap_client_connections) expiry = rxrpc_conn_idle_client_fast_expiry; - if (conn->params.local->service_closed) + if (conn->local->service_closed) expiry = rxrpc_closed_conn_expiry * HZ; conn_expires_at = conn->idle_timestamp + expiry; @@ -1110,7 +1121,7 @@ void rxrpc_clean_up_local_conns(struct rxrpc_local *local) list_for_each_entry_safe(conn, tmp, &rxnet->idle_client_conns, cache_link) { - if (conn->params.local == local) { + if (conn->local == local) { trace_rxrpc_client(conn, -1, rxrpc_client_discard); list_move(&conn->cache_link, &graveyard); } diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c index d5549cbfc71b..71ed6b9dc63a 100644 --- a/net/rxrpc/conn_event.c +++ b/net/rxrpc/conn_event.c @@ -52,8 +52,8 @@ static void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn, if (skb && call_id != sp->hdr.callNumber) return; - msg.msg_name = &conn->params.peer->srx.transport; - msg.msg_namelen = conn->params.peer->srx.transport_len; + msg.msg_name = &conn->peer->srx.transport; + msg.msg_namelen = conn->peer->srx.transport_len; msg.msg_control = NULL; msg.msg_controllen = 0; msg.msg_flags = 0; @@ -86,8 +86,8 @@ static void 
rxrpc_conn_retransmit_call(struct rxrpc_connection *conn, break; case RXRPC_PACKET_TYPE_ACK: - mtu = conn->params.peer->if_mtu; - mtu -= conn->params.peer->hdrsize; + mtu = conn->peer->if_mtu; + mtu -= conn->peer->hdrsize; pkt.ack.bufferSpace = 0; pkt.ack.maxSkew = htons(skb ? skb->priority : 0); pkt.ack.firstPacket = htonl(chan->last_seq + 1); @@ -131,8 +131,8 @@ static void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn, break; } - ret = kernel_sendmsg(conn->params.local->socket, &msg, iov, ioc, len); - conn->params.peer->last_tx_at = ktime_get_seconds(); + ret = kernel_sendmsg(conn->local->socket, &msg, iov, ioc, len); + conn->peer->last_tx_at = ktime_get_seconds(); if (ret < 0) trace_rxrpc_tx_fail(chan->call_debug_id, serial, ret, rxrpc_tx_point_call_final_resend); @@ -211,8 +211,8 @@ static int rxrpc_abort_connection(struct rxrpc_connection *conn, set_bit(RXRPC_CONN_DONT_REUSE, &conn->flags); spin_unlock_bh(&conn->state_lock); - msg.msg_name = &conn->params.peer->srx.transport; - msg.msg_namelen = conn->params.peer->srx.transport_len; + msg.msg_name = &conn->peer->srx.transport; + msg.msg_namelen = conn->peer->srx.transport_len; msg.msg_control = NULL; msg.msg_controllen = 0; msg.msg_flags = 0; @@ -241,7 +241,7 @@ static int rxrpc_abort_connection(struct rxrpc_connection *conn, rxrpc_abort_calls(conn, RXRPC_CALL_LOCALLY_ABORTED, serial); whdr.serial = htonl(serial); - ret = kernel_sendmsg(conn->params.local->socket, &msg, iov, 2, len); + ret = kernel_sendmsg(conn->local->socket, &msg, iov, 2, len); if (ret < 0) { trace_rxrpc_tx_fail(conn->debug_id, serial, ret, rxrpc_tx_point_conn_abort); @@ -251,7 +251,7 @@ static int rxrpc_abort_connection(struct rxrpc_connection *conn, trace_rxrpc_tx_packet(conn->debug_id, &whdr, rxrpc_tx_point_conn_abort); - conn->params.peer->last_tx_at = ktime_get_seconds(); + conn->peer->last_tx_at = ktime_get_seconds(); _leave(" = 0"); return 0; @@ -330,7 +330,7 @@ static int rxrpc_process_event(struct rxrpc_connection *conn, return ret; ret = conn->security->init_connection_security( - conn, conn->params.key->payload.data[0]); + conn, conn->key->payload.data[0]); if (ret < 0) return ret; @@ -484,9 +484,9 @@ void rxrpc_process_connection(struct work_struct *work) rxrpc_see_connection(conn); - if (__rxrpc_use_local(conn->params.local)) { + if (__rxrpc_use_local(conn->local)) { rxrpc_do_process_connection(conn); - rxrpc_unuse_local(conn->params.local); + rxrpc_unuse_local(conn->local); } rxrpc_put_connection(conn); diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c index d5d15389406f..ad6e5ee1f069 100644 --- a/net/rxrpc/conn_object.c +++ b/net/rxrpc/conn_object.c @@ -120,10 +120,10 @@ struct rxrpc_connection *rxrpc_find_connection_rcu(struct rxrpc_local *local, } if (conn->proto.epoch != k.epoch || - conn->params.local != local) + conn->local != local) goto not_found; - peer = conn->params.peer; + peer = conn->peer; switch (srx.transport.family) { case AF_INET: if (peer->srx.transport.sin.sin_port != @@ -231,7 +231,7 @@ void rxrpc_disconnect_call(struct rxrpc_call *call) */ void rxrpc_kill_connection(struct rxrpc_connection *conn) { - struct rxrpc_net *rxnet = conn->params.local->rxnet; + struct rxrpc_net *rxnet = conn->local->rxnet; ASSERT(!rcu_access_pointer(conn->channels[0].call) && !rcu_access_pointer(conn->channels[1].call) && @@ -340,7 +340,7 @@ void rxrpc_put_service_conn(struct rxrpc_connection *conn) __refcount_dec(&conn->ref, &r); trace_rxrpc_conn(debug_id, rxrpc_conn_put_service, r - 1, here); if (r - 1 == 1) - 
rxrpc_set_service_reap_timer(conn->params.local->rxnet, + rxrpc_set_service_reap_timer(conn->local->rxnet, jiffies + rxrpc_connection_expiry); } @@ -360,13 +360,13 @@ static void rxrpc_destroy_connection(struct rcu_head *rcu) rxrpc_purge_queue(&conn->rx_queue); conn->security->clear(conn); - key_put(conn->params.key); + key_put(conn->key); rxrpc_put_bundle(conn->bundle); - rxrpc_put_peer(conn->params.peer); + rxrpc_put_peer(conn->peer); - if (atomic_dec_and_test(&conn->params.local->rxnet->nr_conns)) - wake_up_var(&conn->params.local->rxnet->nr_conns); - rxrpc_put_local(conn->params.local); + if (atomic_dec_and_test(&conn->local->rxnet->nr_conns)) + wake_up_var(&conn->local->rxnet->nr_conns); + rxrpc_put_local(conn->local); kfree(conn); _leave(""); @@ -397,10 +397,10 @@ void rxrpc_service_connection_reaper(struct work_struct *work) if (conn->state == RXRPC_CONN_SERVICE_PREALLOC) continue; - if (rxnet->live && !conn->params.local->dead) { + if (rxnet->live && !conn->local->dead) { idle_timestamp = READ_ONCE(conn->idle_timestamp); expire_at = idle_timestamp + rxrpc_connection_expiry * HZ; - if (conn->params.local->service_closed) + if (conn->local->service_closed) expire_at = idle_timestamp + rxrpc_closed_conn_expiry * HZ; _debug("reap CONN %d { u=%d,t=%ld }", diff --git a/net/rxrpc/conn_service.c b/net/rxrpc/conn_service.c index 75f903099eb0..a3b91864ef21 100644 --- a/net/rxrpc/conn_service.c +++ b/net/rxrpc/conn_service.c @@ -164,7 +164,7 @@ void rxrpc_new_incoming_connection(struct rxrpc_sock *rx, conn->proto.epoch = sp->hdr.epoch; conn->proto.cid = sp->hdr.cid & RXRPC_CIDMASK; - conn->params.service_id = sp->hdr.serviceId; + conn->orig_service_id = sp->hdr.serviceId; conn->service_id = sp->hdr.serviceId; conn->security_ix = sp->hdr.securityIndex; conn->out_clientflag = 0; @@ -183,7 +183,7 @@ void rxrpc_new_incoming_connection(struct rxrpc_sock *rx, conn->service_id = rx->service_upgrade.to; /* Make the connection a target for incoming packets. 
*/ - rxrpc_publish_service_conn(conn->params.peer, conn); + rxrpc_publish_service_conn(conn->peer, conn); } /* @@ -192,7 +192,7 @@ void rxrpc_new_incoming_connection(struct rxrpc_sock *rx, */ void rxrpc_unpublish_service_conn(struct rxrpc_connection *conn) { - struct rxrpc_peer *peer = conn->params.peer; + struct rxrpc_peer *peer = conn->peer; write_seqlock_bh(&peer->service_conn_lock); if (test_and_clear_bit(RXRPC_CONN_IN_SERVICE_CONNS, &conn->flags)) diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index e2461f29d765..44caf88e04b8 100644 --- a/net/rxrpc/input.c +++ b/net/rxrpc/input.c @@ -1339,10 +1339,10 @@ int rxrpc_input_packet(struct sock *udp_sk, struct sk_buff *skb) if (!test_bit(RXRPC_CONN_PROBING_FOR_UPGRADE, &conn->flags)) goto reupgrade; - old_id = cmpxchg(&conn->service_id, conn->params.service_id, + old_id = cmpxchg(&conn->service_id, conn->orig_service_id, sp->hdr.serviceId); - if (old_id != conn->params.service_id && + if (old_id != conn->orig_service_id && old_id != sp->hdr.serviceId) goto reupgrade; } diff --git a/net/rxrpc/key.c b/net/rxrpc/key.c index 8d2073e0e3da..1ecddcb3745a 100644 --- a/net/rxrpc/key.c +++ b/net/rxrpc/key.c @@ -513,7 +513,7 @@ int rxrpc_get_server_data_key(struct rxrpc_connection *conn, if (ret < 0) goto error; - conn->params.key = key; + conn->key = key; _leave(" = 0 [%d]", key_serial(key)); return 0; diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index 635acf3dbd77..b5d8eac8c49c 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -142,8 +142,8 @@ static size_t rxrpc_fill_out_ack(struct rxrpc_connection *conn, txb->ack.reason = RXRPC_ACK_IDLE; } - mtu = conn->params.peer->if_mtu; - mtu -= conn->params.peer->hdrsize; + mtu = conn->peer->if_mtu; + mtu -= conn->peer->hdrsize; jmax = rxrpc_rx_jumbo_max; qsize = (window - 1) - call->rx_consumed; rsize = max_t(int, call->rx_winsize - qsize, 0); @@ -259,7 +259,7 @@ static int rxrpc_send_ack_packet(struct rxrpc_local *local, struct rxrpc_txbuf * txb->ack.previousPacket = htonl(call->rx_highest_seq); iov_iter_kvec(&msg.msg_iter, WRITE, iov, 1, len); - ret = do_udp_sendmsg(conn->params.local->socket, &msg, len); + ret = do_udp_sendmsg(conn->local->socket, &msg, len); call->peer->last_tx_at = ktime_get_seconds(); if (ret < 0) trace_rxrpc_tx_fail(call->debug_id, serial, ret, @@ -368,8 +368,8 @@ int rxrpc_send_abort_packet(struct rxrpc_call *call) pkt.whdr.serial = htonl(serial); iov_iter_kvec(&msg.msg_iter, WRITE, iov, 1, sizeof(pkt)); - ret = do_udp_sendmsg(conn->params.local->socket, &msg, sizeof(pkt)); - conn->params.peer->last_tx_at = ktime_get_seconds(); + ret = do_udp_sendmsg(conn->local->socket, &msg, sizeof(pkt)); + conn->peer->last_tx_at = ktime_get_seconds(); if (ret < 0) trace_rxrpc_tx_fail(call->debug_id, serial, ret, rxrpc_tx_point_call_abort); @@ -473,7 +473,7 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) if (txb->len >= call->peer->maxdata) goto send_fragmentable; - down_read(&conn->params.local->defrag_sem); + down_read(&conn->local->defrag_sem); txb->last_sent = ktime_get_real(); if (txb->wire.flags & RXRPC_REQUEST_ACK) @@ -486,10 +486,10 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) * message and update the peer record */ rxrpc_inc_stat(call->rxnet, stat_tx_data_send); - ret = do_udp_sendmsg(conn->params.local->socket, &msg, len); - conn->params.peer->last_tx_at = ktime_get_seconds(); + ret = do_udp_sendmsg(conn->local->socket, &msg, len); + conn->peer->last_tx_at = ktime_get_seconds(); - 
up_read(&conn->params.local->defrag_sem); + up_read(&conn->local->defrag_sem); if (ret < 0) { rxrpc_cancel_rtt_probe(call, serial, rtt_slot); trace_rxrpc_tx_fail(call->debug_id, serial, ret, @@ -549,22 +549,22 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) /* attempt to send this message with fragmentation enabled */ _debug("send fragment"); - down_write(&conn->params.local->defrag_sem); + down_write(&conn->local->defrag_sem); txb->last_sent = ktime_get_real(); if (txb->wire.flags & RXRPC_REQUEST_ACK) rtt_slot = rxrpc_begin_rtt_probe(call, serial, rxrpc_rtt_tx_data); - switch (conn->params.local->srx.transport.family) { + switch (conn->local->srx.transport.family) { case AF_INET6: case AF_INET: - ip_sock_set_mtu_discover(conn->params.local->socket->sk, + ip_sock_set_mtu_discover(conn->local->socket->sk, IP_PMTUDISC_DONT); rxrpc_inc_stat(call->rxnet, stat_tx_data_send_frag); - ret = do_udp_sendmsg(conn->params.local->socket, &msg, len); - conn->params.peer->last_tx_at = ktime_get_seconds(); + ret = do_udp_sendmsg(conn->local->socket, &msg, len); + conn->peer->last_tx_at = ktime_get_seconds(); - ip_sock_set_mtu_discover(conn->params.local->socket->sk, + ip_sock_set_mtu_discover(conn->local->socket->sk, IP_PMTUDISC_DO); break; @@ -582,7 +582,7 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) } rxrpc_tx_backoff(call, ret); - up_write(&conn->params.local->defrag_sem); + up_write(&conn->local->defrag_sem); goto done; } diff --git a/net/rxrpc/proc.c b/net/rxrpc/proc.c index fae22a8b38d6..bb2edf6db896 100644 --- a/net/rxrpc/proc.c +++ b/net/rxrpc/proc.c @@ -172,9 +172,9 @@ static int rxrpc_connection_seq_show(struct seq_file *seq, void *v) goto print; } - sprintf(lbuff, "%pISpc", &conn->params.local->srx.transport); + sprintf(lbuff, "%pISpc", &conn->local->srx.transport); - sprintf(rbuff, "%pISpc", &conn->params.peer->srx.transport); + sprintf(rbuff, "%pISpc", &conn->peer->srx.transport); print: seq_printf(seq, "UDP %-47.47s %-47.47s %4x %08x %s %3u" @@ -186,7 +186,7 @@ static int rxrpc_connection_seq_show(struct seq_file *seq, void *v) rxrpc_conn_is_service(conn) ? 
"Svc" : "Clt", refcount_read(&conn->ref), rxrpc_conn_states[conn->state], - key_serial(conn->params.key), + key_serial(conn->key), atomic_read(&conn->serial), conn->hi_serial, conn->channels[0].call_id, diff --git a/net/rxrpc/rxkad.c b/net/rxrpc/rxkad.c index 36cf40442a7e..d1233720e05f 100644 --- a/net/rxrpc/rxkad.c +++ b/net/rxrpc/rxkad.c @@ -103,7 +103,7 @@ static int rxkad_init_connection_security(struct rxrpc_connection *conn, struct crypto_sync_skcipher *ci; int ret; - _enter("{%d},{%x}", conn->debug_id, key_serial(conn->params.key)); + _enter("{%d},{%x}", conn->debug_id, key_serial(conn->key)); conn->security_ix = token->security_index; @@ -118,7 +118,7 @@ static int rxkad_init_connection_security(struct rxrpc_connection *conn, sizeof(token->kad->session_key)) < 0) BUG(); - switch (conn->params.security_level) { + switch (conn->security_level) { case RXRPC_SECURITY_PLAIN: case RXRPC_SECURITY_AUTH: case RXRPC_SECURITY_ENCRYPT: @@ -150,7 +150,7 @@ static int rxkad_how_much_data(struct rxrpc_call *call, size_t remain, { size_t shdr, buf_size, chunk; - switch (call->conn->params.security_level) { + switch (call->conn->security_level) { default: buf_size = chunk = min_t(size_t, remain, RXRPC_JUMBO_DATALEN); shdr = 0; @@ -192,7 +192,7 @@ static int rxkad_prime_packet_security(struct rxrpc_connection *conn, _enter(""); - if (!conn->params.key) + if (!conn->key) return 0; tmpbuf = kmalloc(tmpsize, GFP_KERNEL); @@ -205,7 +205,7 @@ static int rxkad_prime_packet_security(struct rxrpc_connection *conn, return -ENOMEM; } - token = conn->params.key->payload.data[0]; + token = conn->key->payload.data[0]; memcpy(&iv, token->kad->session_key, sizeof(iv)); tmpbuf[0] = htonl(conn->proto.epoch); @@ -317,7 +317,7 @@ static int rxkad_secure_packet_encrypt(const struct rxrpc_call *call, } /* encrypt from the session key */ - token = call->conn->params.key->payload.data[0]; + token = call->conn->key->payload.data[0]; memcpy(&iv, token->kad->session_key, sizeof(iv)); sg_init_one(&sg, txb->data, txb->len); @@ -344,13 +344,13 @@ static int rxkad_secure_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) int ret; _enter("{%d{%x}},{#%u},%u,", - call->debug_id, key_serial(call->conn->params.key), + call->debug_id, key_serial(call->conn->key), txb->seq, txb->len); if (!call->conn->rxkad.cipher) return 0; - ret = key_validate(call->conn->params.key); + ret = key_validate(call->conn->key); if (ret < 0) return ret; @@ -380,7 +380,7 @@ static int rxkad_secure_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) y = 1; /* zero checksums are not permitted */ txb->wire.cksum = htons(y); - switch (call->conn->params.security_level) { + switch (call->conn->security_level) { case RXRPC_SECURITY_PLAIN: ret = 0; break; @@ -525,7 +525,7 @@ static int rxkad_verify_packet_2(struct rxrpc_call *call, struct sk_buff *skb, } /* decrypt from the session key */ - token = call->conn->params.key->payload.data[0]; + token = call->conn->key->payload.data[0]; memcpy(&iv, token->kad->session_key, sizeof(iv)); skcipher_request_set_sync_tfm(req, call->conn->rxkad.cipher); @@ -596,7 +596,7 @@ static int rxkad_verify_packet(struct rxrpc_call *call, struct sk_buff *skb) u32 x, y; _enter("{%d{%x}},{#%u}", - call->debug_id, key_serial(call->conn->params.key), seq); + call->debug_id, key_serial(call->conn->key), seq); if (!call->conn->rxkad.cipher) return 0; @@ -632,7 +632,7 @@ static int rxkad_verify_packet(struct rxrpc_call *call, struct sk_buff *skb) goto protocol_error; } - switch (call->conn->params.security_level) { + switch 
(call->conn->security_level) { case RXRPC_SECURITY_PLAIN: ret = 0; break; @@ -678,8 +678,8 @@ static int rxkad_issue_challenge(struct rxrpc_connection *conn) challenge.min_level = htonl(0); challenge.__padding = 0; - msg.msg_name = &conn->params.peer->srx.transport; - msg.msg_namelen = conn->params.peer->srx.transport_len; + msg.msg_name = &conn->peer->srx.transport; + msg.msg_namelen = conn->peer->srx.transport_len; msg.msg_control = NULL; msg.msg_controllen = 0; msg.msg_flags = 0; @@ -705,14 +705,14 @@ static int rxkad_issue_challenge(struct rxrpc_connection *conn) serial = atomic_inc_return(&conn->serial); whdr.serial = htonl(serial); - ret = kernel_sendmsg(conn->params.local->socket, &msg, iov, 2, len); + ret = kernel_sendmsg(conn->local->socket, &msg, iov, 2, len); if (ret < 0) { trace_rxrpc_tx_fail(conn->debug_id, serial, ret, rxrpc_tx_point_rxkad_challenge); return -EAGAIN; } - conn->params.peer->last_tx_at = ktime_get_seconds(); + conn->peer->last_tx_at = ktime_get_seconds(); trace_rxrpc_tx_packet(conn->debug_id, &whdr, rxrpc_tx_point_rxkad_challenge); _leave(" = 0"); @@ -736,8 +736,8 @@ static int rxkad_send_response(struct rxrpc_connection *conn, _enter(""); - msg.msg_name = &conn->params.peer->srx.transport; - msg.msg_namelen = conn->params.peer->srx.transport_len; + msg.msg_name = &conn->peer->srx.transport; + msg.msg_namelen = conn->peer->srx.transport_len; msg.msg_control = NULL; msg.msg_controllen = 0; msg.msg_flags = 0; @@ -762,14 +762,14 @@ static int rxkad_send_response(struct rxrpc_connection *conn, serial = atomic_inc_return(&conn->serial); whdr.serial = htonl(serial); - ret = kernel_sendmsg(conn->params.local->socket, &msg, iov, 3, len); + ret = kernel_sendmsg(conn->local->socket, &msg, iov, 3, len); if (ret < 0) { trace_rxrpc_tx_fail(conn->debug_id, serial, ret, rxrpc_tx_point_rxkad_response); return -EAGAIN; } - conn->params.peer->last_tx_at = ktime_get_seconds(); + conn->peer->last_tx_at = ktime_get_seconds(); _leave(" = 0"); return 0; } @@ -832,15 +832,15 @@ static int rxkad_respond_to_challenge(struct rxrpc_connection *conn, u32 version, nonce, min_level, abort_code; int ret; - _enter("{%d,%x}", conn->debug_id, key_serial(conn->params.key)); + _enter("{%d,%x}", conn->debug_id, key_serial(conn->key)); eproto = tracepoint_string("chall_no_key"); abort_code = RX_PROTOCOL_ERROR; - if (!conn->params.key) + if (!conn->key) goto protocol_error; abort_code = RXKADEXPIRED; - ret = key_validate(conn->params.key); + ret = key_validate(conn->key); if (ret < 0) goto other_error; @@ -863,10 +863,10 @@ static int rxkad_respond_to_challenge(struct rxrpc_connection *conn, abort_code = RXKADLEVELFAIL; ret = -EACCES; - if (conn->params.security_level < min_level) + if (conn->security_level < min_level) goto other_error; - token = conn->params.key->payload.data[0]; + token = conn->key->payload.data[0]; /* build the response packet */ resp = kzalloc(sizeof(struct rxkad_response), GFP_NOFS); @@ -878,7 +878,7 @@ static int rxkad_respond_to_challenge(struct rxrpc_connection *conn, resp->encrypted.cid = htonl(conn->proto.cid); resp->encrypted.securityIndex = htonl(conn->security_ix); resp->encrypted.inc_nonce = htonl(nonce + 1); - resp->encrypted.level = htonl(conn->params.security_level); + resp->encrypted.level = htonl(conn->security_level); resp->kvno = htonl(token->kad->kvno); resp->ticket_len = htonl(token->kad->ticket_len); resp->encrypted.call_id[0] = htonl(conn->channels[0].call_counter); @@ -1226,7 +1226,7 @@ static int rxkad_verify_response(struct rxrpc_connection *conn, level = 
ntohl(response->encrypted.level); if (level > RXRPC_SECURITY_ENCRYPT) goto protocol_error_free; - conn->params.security_level = level; + conn->security_level = level; /* create a key to hold the security data and expiration time - after * this the connection security can be handled in exactly the same way diff --git a/net/rxrpc/security.c b/net/rxrpc/security.c index 50cb5f1ee0c0..e6ddac9b3732 100644 --- a/net/rxrpc/security.c +++ b/net/rxrpc/security.c @@ -69,7 +69,7 @@ int rxrpc_init_client_conn_security(struct rxrpc_connection *conn) { const struct rxrpc_security *sec; struct rxrpc_key_token *token; - struct key *key = conn->params.key; + struct key *key = conn->key; int ret; _enter("{%d},{%x}", conn->debug_id, key_serial(key)); @@ -163,7 +163,7 @@ struct key *rxrpc_look_up_server_security(struct rxrpc_connection *conn, rcu_read_lock(); - rx = rcu_dereference(conn->params.local->service); + rx = rcu_dereference(conn->local->service); if (!rx) goto out; From patchwork Wed Nov 30 16:55:27 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 27883 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp1038612wrr; Wed, 30 Nov 2022 09:00:01 -0800 (PST) X-Google-Smtp-Source: AA0mqf6O7ppUz/2CWLK3Dpb+awdp7IH9L+HzwFEdVCrkbiNRz47rebzpJw/WuFKVP17gt5cBOYE/ X-Received: by 2002:a17:906:4bcc:b0:7be:6ab8:4ccc with SMTP id x12-20020a1709064bcc00b007be6ab84cccmr17004717ejv.713.1669827601642; Wed, 30 Nov 2022 09:00:01 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669827601; cv=none; d=google.com; s=arc-20160816; b=xMWIAQ6DQ0GqbpvlIa6Vyfv5jrIyk2ZFWfvJG5kS2dLnu5pX8M49TgjVEIw8DWITsA VQiAEcsbtNi+FGeL6ewirwWXfpdqGg1NRuafEarqvhXWPxLZRqxFniFcfVYdKVefqI+J hbxSrEpj8qxwvzF11tmmxqMHkaFhzkz5yt1sFQKJ2QYyfvo7Z8YO349Gprim0XMAyoMU 09NX7NQKin/AKyt7bEEl0vOHJJjuz+QDMuIyRHkoY1aTDBbWlMj37DzZO181Y60mjOtk CDmf3gI09wyr4UBHtMwIkeT9ge3Cr9fR0E43Mv5sOzxC1fxJiebd8Bmg/uk6kCKhH9zo o5rg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:cc:to:from :subject:organization:dkim-signature; bh=RWE246jOHYV/PB81lSut/TryVZE+bm1/17r8oeSbAPg=; b=sJWu/ZH1mh7nmAdrrPQXMu0aZcZL5nX3DjBjy0d0FqW9zHsfWjgq4HwANrCFt1JTUY u+/a1cti1N7ORG7rWGxQd0OdbhYfieeGz/c0taglA4N6a8we3vGMY3sb7rxM4qLs/aR4 L1/v6weTb2Lto6jS+/pCPXk7E1nZOG2AFcKpGZ/UFar1pmk80SVf4LNoF89MZzkfZkSd kfoyBMc8tBHqWvd9TMl6I/5gMka4gSe1HAIjt+66MboDbYYZo+SYprTv1Mwmc/3BMLYw MoFRxkfG+1PoWmp0RMWZUCB8YkiEBnhyVU4YNor8XN6PQw3jcQBU1hEcS4Hydohb8l3L OeRA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=TSKUnyZy; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: from out1.vger.email (out1.vger.email. 
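The mechanical effect of this change on users of the two structs is that every accessor loses one level of indirection, for example (an illustrative sketch only; the helper function below is not in the patch, the fields are the ones added above):

static unsigned int sketch_conn_payload_mtu(const struct rxrpc_connection *conn)
{
	/* Was: conn->params.peer->if_mtu - conn->params.peer->hdrsize */
	return conn->peer->if_mtu - conn->peer->hdrsize;
}

rxrpc_conn_parameters itself survives as the caller-supplied parameter block; rxrpc_alloc_bundle() now copies its fields individually (bundle->local = cp->local and so on) instead of the old bundle->params = *cp structure assignment, which is what allows these members to be filled in from the rxrpc_call struct later, as the description above says.
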
3798903 Subject: [PATCH net-next 08/35] rxrpc: Extract the code from a received ABORT packet much earlier From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Wed, 30 Nov 2022 16:55:27 +0000 Message-ID: <166982732705.621383.6027273077124902248.stgit@warthog.procyon.org.uk> In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> References: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750941147357514136?= X-GMAIL-MSGID: =?utf-8?q?1750941147357514136?= Extract the code from a received rx ABORT packet much earlier and in a single place and harmonise the responses to malformed ABORT packets. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- net/rxrpc/conn_event.c | 12 +----------- net/rxrpc/input.c | 31 +++++++++++++++++++------------ 2 files changed, 20 insertions(+), 23 deletions(-) diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c index 71ed6b9dc63a..f890a30c4df6 100644 --- a/net/rxrpc/conn_event.c +++ b/net/rxrpc/conn_event.c @@ -282,8 +282,6 @@ static int rxrpc_process_event(struct rxrpc_connection *conn, u32 *_abort_code) { struct rxrpc_skb_priv *sp = rxrpc_skb(skb); - __be32 wtmp; - u32 abort_code; int loop, ret; if (conn->state >= RXRPC_CONN_REMOTELY_ABORTED) { @@ -305,16 +303,8 @@ static int rxrpc_process_event(struct rxrpc_connection *conn, return 0; case RXRPC_PACKET_TYPE_ABORT: - if (skb_copy_bits(skb, sizeof(struct rxrpc_wire_header), - &wtmp, sizeof(wtmp)) < 0) { - trace_rxrpc_rx_eproto(NULL, sp->hdr.serial, - tracepoint_string("bad_abort")); - return -EPROTO; - } - abort_code = ntohl(wtmp); - conn->error = -ECONNABORTED; - conn->abort_code = abort_code; + conn->abort_code = skb->priority; conn->state = RXRPC_CONN_REMOTELY_ABORTED; set_bit(RXRPC_CONN_DONT_REUSE, &conn->flags); rxrpc_abort_calls(conn, RXRPC_CALL_REMOTELY_ABORTED, sp->hdr.serial); diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index 44caf88e04b8..42c8257158f7 100644 --- a/net/rxrpc/input.c +++ b/net/rxrpc/input.c @@ -1019,20 +1019,11 @@ static void rxrpc_input_ackall(struct rxrpc_call *call, struct sk_buff *skb) static void rxrpc_input_abort(struct rxrpc_call *call, struct sk_buff *skb) { struct rxrpc_skb_priv *sp = rxrpc_skb(skb); - __be32 wtmp; - u32 abort_code = RX_CALL_DEAD; - - _enter(""); - - if (skb->len >= 4 && - skb_copy_bits(skb, sizeof(struct rxrpc_wire_header), - &wtmp, sizeof(wtmp)) >= 0) - abort_code = ntohl(wtmp); - trace_rxrpc_rx_abort(call, sp->hdr.serial, abort_code); + trace_rxrpc_rx_abort(call, sp->hdr.serial, skb->priority); rxrpc_set_call_completion(call, RXRPC_CALL_REMOTELY_ABORTED, - abort_code, -ECONNABORTED); + skb->priority, -ECONNABORTED); } /* @@ -1193,6 +1184,20 @@ int rxrpc_extract_header(struct rxrpc_skb_priv *sp, struct sk_buff *skb) return 0; } +/* + * Extract the abort code from an ABORT packet and stash 
it in skb->priority. + */ +static bool rxrpc_extract_abort(struct sk_buff *skb) +{ + __be32 wtmp; + + if (skb_copy_bits(skb, sizeof(struct rxrpc_wire_header), + &wtmp, sizeof(wtmp)) < 0) + return false; + skb->priority = ntohl(wtmp); + return true; +} + /* * handle data received on the local endpoint * - may be called in interrupt context @@ -1264,8 +1269,10 @@ int rxrpc_input_packet(struct sock *udp_sk, struct sk_buff *skb) case RXRPC_PACKET_TYPE_ACKALL: if (sp->hdr.callNumber == 0) goto bad_message; - fallthrough; + break; case RXRPC_PACKET_TYPE_ABORT: + if (!rxrpc_extract_abort(skb)) + return true; /* Just discard if malformed */ break; case RXRPC_PACKET_TYPE_DATA: From patchwork Wed Nov 30 16:55:35 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 27882 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp1038574wrr; Wed, 30 Nov 2022 08:59:55 -0800 (PST) X-Google-Smtp-Source: AA0mqf7ezK/ZqKCVGrvIDUaNpwogMDhWrghY9e4cL3r6sHmbPdgmgSksTU+jzQHm1/ra/4VKZ9G9 X-Received: by 2002:a63:2163:0:b0:474:d6fa:f574 with SMTP id s35-20020a632163000000b00474d6faf574mr40808055pgm.190.1669827594872; Wed, 30 Nov 2022 08:59:54 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669827594; cv=none; d=google.com; s=arc-20160816; b=i/Q7i+p/FiYMfFHXzzlNZk8cTVK1IhSzC0uMalcUCO7jDaWODXIIfWtmMAq6aReuPH 0Jlg0DYkN+63odpjD9Y2pttFz4ZHe0vsQZYELklqyVoPQKznHdKM7xMTQbORXKnkeMnr RPm2ZHfHoFg7zMUz/Q7ew/yohc52CMAFsNIfPq7tmbuQSnAAoAUnj28//5OZABTKVCoj d98SykIj9SH6io0O6oDtCASEA4VlD/IXI2juRHBDU0P0+Ukt3muIY5Ep6if1pLKFVfAT nnvJI4O3xmgZ1KWGZfjxaYRdb3wMgi4h66I2X0A3ErqLnv/4eiZS4r74lU3qOW7f1sGE +xeg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:cc:to:from :subject:organization:dkim-signature; bh=GRsrjTWsdHtPIVIzz7RIyloR2kSQnVsq53ws+OkgYXY=; b=P0x0jbJpiBSH/1ZVVaOBrhJb/wsuwh7ie+3vHQ5o/gbJs+ZvvYw3UpFgjiU0HLWVlM O983G3HujDDhvuNcyMVOPIrLWikb8CFYSnLtcBqFFBHfcQWfN3rh0bmMlB8IDQ9dKM3W lz9zDos0Ab+jL7cyj/NsRnX1zfxszxZpqPJyvpNHRK5Z4Vlv43VlY7YkPLiJSUuk7jU5 5c+fqGLr5QwYmVVSeVCLLOSIkoYgyQaPIuKET8PBprZUzCfpe5GuWJi8JHL0pnWg5Vyk CrbCpWue2vp6HBOIJQdx2GkIoaioBitc/FOKB0Biri0TJcs/wSsNPWrl5gCjaijFDYsE 8mSA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=L6elMDbN; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: from out1.vger.email (out1.vger.email. 
3798903 Subject: [PATCH net-next 09/35] rxrpc: trace: Don't use __builtin_return_address for rxrpc_local tracing From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Wed, 30 Nov 2022 16:55:35 +0000 Message-ID: <166982733564.621383.13364100665781176323.stgit@warthog.procyon.org.uk> In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> References: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750941140364909772?= X-GMAIL-MSGID: =?utf-8?q?1750941140364909772?= In rxrpc tracing, use enums to generate lists of points of interest rather than __builtin_return_address() for the rxrpc_local tracepoint Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- include/trace/events/rxrpc.h | 49 +++++++++++++++++++------- net/rxrpc/af_rxrpc.c | 8 ++-- net/rxrpc/ar-internal.h | 41 +++++++++++++++++----- net/rxrpc/call_accept.c | 4 +- net/rxrpc/call_event.c | 2 + net/rxrpc/conn_client.c | 2 + net/rxrpc/conn_event.c | 4 +- net/rxrpc/conn_object.c | 2 + net/rxrpc/input.c | 4 +- net/rxrpc/local_object.c | 78 +++++++++++++++++++++++------------------- net/rxrpc/output.c | 3 +- net/rxrpc/peer_event.c | 4 +- net/rxrpc/peer_object.c | 4 +- 13 files changed, 129 insertions(+), 76 deletions(-) diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index 2b77f9a75bf7..015569845b1d 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -32,12 +32,35 @@ E_(rxrpc_skb_unshared_nomem, "US0") #define rxrpc_local_traces \ - EM(rxrpc_local_got, "GOT") \ - EM(rxrpc_local_new, "NEW") \ - EM(rxrpc_local_processing, "PRO") \ - EM(rxrpc_local_put, "PUT") \ - EM(rxrpc_local_queued, "QUE") \ - E_(rxrpc_local_tx_ack, "TAK") + EM(rxrpc_local_free, "FREE ") \ + EM(rxrpc_local_get_client_conn, "GET conn-cln") \ + EM(rxrpc_local_get_for_use, "GET for-use ") \ + EM(rxrpc_local_get_peer, "GET peer ") \ + EM(rxrpc_local_get_prealloc_conn, "GET conn-pre") \ + EM(rxrpc_local_get_queue, "GET queue ") \ + EM(rxrpc_local_new, "NEW ") \ + EM(rxrpc_local_processing, "PROCESSING ") \ + EM(rxrpc_local_put_already_queued, "PUT alreadyq") \ + EM(rxrpc_local_put_bind, "PUT bind ") \ + EM(rxrpc_local_put_for_use, "PUT for-use ") \ + EM(rxrpc_local_put_kill_conn, "PUT conn-kil") \ + EM(rxrpc_local_put_peer, "PUT peer ") \ + EM(rxrpc_local_put_prealloc_conn, "PUT conn-pre") \ + EM(rxrpc_local_put_release_sock, "PUT rel-sock") \ + EM(rxrpc_local_put_queue, "PUT queue ") \ + EM(rxrpc_local_queued, "QUEUED ") \ + EM(rxrpc_local_see_tx_ack, "SEE tx-ack ") \ + EM(rxrpc_local_stop, "STOP ") \ + EM(rxrpc_local_stopped, "STOPPED ") \ + EM(rxrpc_local_unuse_bind, "UNU bind ") \ + EM(rxrpc_local_unuse_conn_work, "UNU conn-wrk") \ + EM(rxrpc_local_unuse_peer_keepalive, "UNU peer-kpa") \ + EM(rxrpc_local_unuse_release_sock, "UNU rel-sock") \ + 
EM(rxrpc_local_unuse_work, "UNU work ") \ + EM(rxrpc_local_use_conn_work, "USE conn-wrk") \ + EM(rxrpc_local_use_lookup, "USE lookup ") \ + EM(rxrpc_local_use_peer_keepalive, "USE peer-kpa") \ + E_(rxrpc_local_use_work, "USE work ") #define rxrpc_peer_traces \ EM(rxrpc_peer_got, "GOT") \ @@ -345,29 +368,29 @@ rxrpc_txqueue_traces; TRACE_EVENT(rxrpc_local, TP_PROTO(unsigned int local_debug_id, enum rxrpc_local_trace op, - int usage, const void *where), + int ref, int usage), - TP_ARGS(local_debug_id, op, usage, where), + TP_ARGS(local_debug_id, op, ref, usage), TP_STRUCT__entry( __field(unsigned int, local ) __field(int, op ) + __field(int, ref ) __field(int, usage ) - __field(const void *, where ) ), TP_fast_assign( __entry->local = local_debug_id; __entry->op = op; + __entry->ref = ref; __entry->usage = usage; - __entry->where = where; ), - TP_printk("L=%08x %s u=%d sp=%pSR", + TP_printk("L=%08x %s r=%d u=%d", __entry->local, __print_symbolic(__entry->op, rxrpc_local_traces), - __entry->usage, - __entry->where) + __entry->ref, + __entry->usage) ); TRACE_EVENT(rxrpc_peer, diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c index aacdd96a9886..989ebca899f3 100644 --- a/net/rxrpc/af_rxrpc.c +++ b/net/rxrpc/af_rxrpc.c @@ -194,8 +194,8 @@ static int rxrpc_bind(struct socket *sock, struct sockaddr *saddr, int len) service_in_use: write_unlock(&local->services_lock); - rxrpc_unuse_local(local); - rxrpc_put_local(local); + rxrpc_unuse_local(local, rxrpc_local_unuse_bind); + rxrpc_put_local(local, rxrpc_local_put_bind); ret = -EADDRINUSE; error_unlock: release_sock(&rx->sk); @@ -888,8 +888,8 @@ static int rxrpc_release_sock(struct sock *sk) flush_workqueue(rxrpc_workqueue); rxrpc_purge_queue(&sk->sk_receive_queue); - rxrpc_unuse_local(rx->local); - rxrpc_put_local(rx->local); + rxrpc_unuse_local(rx->local, rxrpc_local_unuse_release_sock); + rxrpc_put_local(rx->local, rxrpc_local_put_release_sock); rx->local = NULL; key_put(rx->key); rx->key = NULL; diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 7c48b0163032..dde9ce21ef48 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -979,22 +979,45 @@ extern void rxrpc_process_local_events(struct rxrpc_local *); * local_object.c */ struct rxrpc_local *rxrpc_lookup_local(struct net *, const struct sockaddr_rxrpc *); -struct rxrpc_local *rxrpc_get_local(struct rxrpc_local *); -struct rxrpc_local *rxrpc_get_local_maybe(struct rxrpc_local *); -void rxrpc_put_local(struct rxrpc_local *); -struct rxrpc_local *rxrpc_use_local(struct rxrpc_local *); -void rxrpc_unuse_local(struct rxrpc_local *); +struct rxrpc_local *rxrpc_get_local(struct rxrpc_local *, enum rxrpc_local_trace); +struct rxrpc_local *rxrpc_get_local_maybe(struct rxrpc_local *, enum rxrpc_local_trace); +void rxrpc_put_local(struct rxrpc_local *, enum rxrpc_local_trace); +struct rxrpc_local *rxrpc_use_local(struct rxrpc_local *, enum rxrpc_local_trace); +void rxrpc_unuse_local(struct rxrpc_local *, enum rxrpc_local_trace); void rxrpc_queue_local(struct rxrpc_local *); void rxrpc_destroy_all_locals(struct rxrpc_net *); -static inline bool __rxrpc_unuse_local(struct rxrpc_local *local) +static inline bool __rxrpc_unuse_local(struct rxrpc_local *local, + enum rxrpc_local_trace why) { - return atomic_dec_return(&local->active_users) == 0; + unsigned int debug_id = local->debug_id; + int r, u; + + r = refcount_read(&local->ref); + u = atomic_dec_return(&local->active_users); + trace_rxrpc_local(debug_id, why, r, u); + return u == 0; +} + +static inline bool 
__rxrpc_use_local(struct rxrpc_local *local, + enum rxrpc_local_trace why) +{ + int r, u; + + r = refcount_read(&local->ref); + u = atomic_fetch_add_unless(&local->active_users, 1, 0); + trace_rxrpc_local(local->debug_id, why, r, u); + return u != 0; } -static inline bool __rxrpc_use_local(struct rxrpc_local *local) +static inline void rxrpc_see_local(struct rxrpc_local *local, + enum rxrpc_local_trace why) { - return atomic_fetch_add_unless(&local->active_users, 1, 0) != 0; + int r, u; + + r = refcount_read(&local->ref); + u = atomic_read(&local->active_users); + trace_rxrpc_local(local->debug_id, why, r, u); } /* diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c index 4888959e4727..1b12d4e28373 100644 --- a/net/rxrpc/call_accept.c +++ b/net/rxrpc/call_accept.c @@ -197,7 +197,7 @@ void rxrpc_discard_prealloc(struct rxrpc_sock *rx) tail = b->peer_backlog_tail; while (CIRC_CNT(head, tail, size) > 0) { struct rxrpc_peer *peer = b->peer_backlog[tail]; - rxrpc_put_local(peer->local); + rxrpc_put_local(peer->local, rxrpc_local_put_prealloc_conn); kfree(peer); tail = (tail + 1) & (size - 1); } @@ -305,7 +305,7 @@ static struct rxrpc_call *rxrpc_alloc_incoming_call(struct rxrpc_sock *rx, b->conn_backlog[conn_tail] = NULL; smp_store_release(&b->conn_backlog_tail, (conn_tail + 1) & (RXRPC_BACKLOG_MAX - 1)); - conn->local = rxrpc_get_local(local); + conn->local = rxrpc_get_local(local, rxrpc_local_get_prealloc_conn); conn->peer = peer; rxrpc_see_connection(conn); rxrpc_new_incoming_connection(rx, conn, sec, skb); diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c index b17ed37434bd..591af8e2e3d0 100644 --- a/net/rxrpc/call_event.c +++ b/net/rxrpc/call_event.c @@ -114,7 +114,7 @@ void rxrpc_send_ACK(struct rxrpc_call *call, u8 ack_reason, if (in_task()) { rxrpc_transmit_ack_packets(call->peer->local); } else { - rxrpc_get_local(local); + rxrpc_get_local(local, rxrpc_local_get_queue); rxrpc_queue_local(local); } } diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c index 71404b33623f..9a69b4c1b182 100644 --- a/net/rxrpc/conn_client.c +++ b/net/rxrpc/conn_client.c @@ -208,7 +208,7 @@ rxrpc_alloc_client_connection(struct rxrpc_bundle *bundle, gfp_t gfp) rxrpc_get_bundle(bundle); rxrpc_get_peer(conn->peer); - rxrpc_get_local(conn->local); + rxrpc_get_local(conn->local, rxrpc_local_get_client_conn); key_get(conn->key); trace_rxrpc_conn(conn->debug_id, rxrpc_conn_new_client, diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c index f890a30c4df6..225edaf019f1 100644 --- a/net/rxrpc/conn_event.c +++ b/net/rxrpc/conn_event.c @@ -474,9 +474,9 @@ void rxrpc_process_connection(struct work_struct *work) rxrpc_see_connection(conn); - if (__rxrpc_use_local(conn->local)) { + if (__rxrpc_use_local(conn->local, rxrpc_local_use_conn_work)) { rxrpc_do_process_connection(conn); - rxrpc_unuse_local(conn->local); + rxrpc_unuse_local(conn->local, rxrpc_local_unuse_conn_work); } rxrpc_put_connection(conn); diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c index ad6e5ee1f069..725359afeac0 100644 --- a/net/rxrpc/conn_object.c +++ b/net/rxrpc/conn_object.c @@ -366,7 +366,7 @@ static void rxrpc_destroy_connection(struct rcu_head *rcu) if (atomic_dec_and_test(&conn->local->rxnet->nr_conns)) wake_up_var(&conn->local->rxnet->nr_conns); - rxrpc_put_local(conn->local); + rxrpc_put_local(conn->local, rxrpc_local_put_kill_conn); kfree(conn); _leave(""); diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index 42c8257158f7..cecfd201d832 100644 --- a/net/rxrpc/input.c +++ 
b/net/rxrpc/input.c @@ -1133,7 +1133,7 @@ static void rxrpc_post_packet_to_local(struct rxrpc_local *local, { _enter("%p,%p", local, skb); - if (rxrpc_get_local_maybe(local)) { + if (rxrpc_get_local_maybe(local, rxrpc_local_get_queue)) { skb_queue_tail(&local->event_queue, skb); rxrpc_queue_local(local); } else { @@ -1146,7 +1146,7 @@ static void rxrpc_post_packet_to_local(struct rxrpc_local *local, */ static void rxrpc_reject_packet(struct rxrpc_local *local, struct sk_buff *skb) { - if (rxrpc_get_local_maybe(local)) { + if (rxrpc_get_local_maybe(local, rxrpc_local_get_queue)) { skb_queue_tail(&local->reject_queue, skb); rxrpc_queue_local(local); } else { diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c index 11080c335d42..1617ce651b9b 100644 --- a/net/rxrpc/local_object.c +++ b/net/rxrpc/local_object.c @@ -110,7 +110,7 @@ static struct rxrpc_local *rxrpc_alloc_local(struct rxrpc_net *rxnet, local->debug_id = atomic_inc_return(&rxrpc_debug_id); memcpy(&local->srx, srx, sizeof(*srx)); local->srx.srx_service = 0; - trace_rxrpc_local(local->debug_id, rxrpc_local_new, 1, NULL); + trace_rxrpc_local(local->debug_id, rxrpc_local_new, 1, 1); } _leave(" = %p", local); @@ -228,7 +228,7 @@ struct rxrpc_local *rxrpc_lookup_local(struct net *net, * we're attempting to use a local address that the dying * object is still using. */ - if (!rxrpc_use_local(local)) + if (!rxrpc_use_local(local, rxrpc_local_use_lookup)) break; goto found; @@ -272,32 +272,32 @@ struct rxrpc_local *rxrpc_lookup_local(struct net *net, /* * Get a ref on a local endpoint. */ -struct rxrpc_local *rxrpc_get_local(struct rxrpc_local *local) +struct rxrpc_local *rxrpc_get_local(struct rxrpc_local *local, + enum rxrpc_local_trace why) { - const void *here = __builtin_return_address(0); - int r; + int r, u; + u = atomic_read(&local->active_users); __refcount_inc(&local->ref, &r); - trace_rxrpc_local(local->debug_id, rxrpc_local_got, r + 1, here); + trace_rxrpc_local(local->debug_id, why, r + 1, u); return local; } /* * Get a ref on a local endpoint unless its usage has already reached 0. */ -struct rxrpc_local *rxrpc_get_local_maybe(struct rxrpc_local *local) +struct rxrpc_local *rxrpc_get_local_maybe(struct rxrpc_local *local, + enum rxrpc_local_trace why) { - const void *here = __builtin_return_address(0); - int r; + int r, u; - if (local) { - if (__refcount_inc_not_zero(&local->ref, &r)) - trace_rxrpc_local(local->debug_id, rxrpc_local_got, - r + 1, here); - else - local = NULL; + if (local && __refcount_inc_not_zero(&local->ref, &r)) { + u = atomic_read(&local->active_users); + trace_rxrpc_local(local->debug_id, why, r + 1, u); + return local; } - return local; + + return NULL; } /* @@ -305,31 +305,31 @@ struct rxrpc_local *rxrpc_get_local_maybe(struct rxrpc_local *local) */ void rxrpc_queue_local(struct rxrpc_local *local) { - const void *here = __builtin_return_address(0); unsigned int debug_id = local->debug_id; int r = refcount_read(&local->ref); + int u = atomic_read(&local->active_users); if (rxrpc_queue_work(&local->processor)) - trace_rxrpc_local(debug_id, rxrpc_local_queued, r + 1, here); + trace_rxrpc_local(debug_id, rxrpc_local_queued, r, u); else - rxrpc_put_local(local); + rxrpc_put_local(local, rxrpc_local_put_already_queued); } /* * Drop a ref on a local endpoint. 
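 * The "why" argument (e.g. rxrpc_local_put_bind or rxrpc_local_put_queue)
 * names the call site so that the rxrpc_local tracepoint can record it
 * symbolically instead of logging a raw return address.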
*/ -void rxrpc_put_local(struct rxrpc_local *local) +void rxrpc_put_local(struct rxrpc_local *local, enum rxrpc_local_trace why) { - const void *here = __builtin_return_address(0); unsigned int debug_id; bool dead; - int r; + int r, u; if (local) { debug_id = local->debug_id; + u = atomic_read(&local->active_users); dead = __refcount_dec_and_test(&local->ref, &r); - trace_rxrpc_local(debug_id, rxrpc_local_put, r, here); + trace_rxrpc_local(debug_id, why, r, u); if (dead) call_rcu(&local->rcu, rxrpc_local_rcu); @@ -339,14 +339,15 @@ void rxrpc_put_local(struct rxrpc_local *local) /* * Start using a local endpoint. */ -struct rxrpc_local *rxrpc_use_local(struct rxrpc_local *local) +struct rxrpc_local *rxrpc_use_local(struct rxrpc_local *local, + enum rxrpc_local_trace why) { - local = rxrpc_get_local_maybe(local); + local = rxrpc_get_local_maybe(local, rxrpc_local_get_for_use); if (!local) return NULL; - if (!__rxrpc_use_local(local)) { - rxrpc_put_local(local); + if (!__rxrpc_use_local(local, why)) { + rxrpc_put_local(local, rxrpc_local_put_for_use); return NULL; } @@ -357,11 +358,18 @@ struct rxrpc_local *rxrpc_use_local(struct rxrpc_local *local) * Cease using a local endpoint. Once the number of active users reaches 0, we * start the closure of the transport in the work processor. */ -void rxrpc_unuse_local(struct rxrpc_local *local) +void rxrpc_unuse_local(struct rxrpc_local *local, enum rxrpc_local_trace why) { + unsigned int debug_id; + int r, u; + if (local) { - if (__rxrpc_unuse_local(local)) { - rxrpc_get_local(local); + debug_id = local->debug_id; + r = refcount_read(&local->ref); + u = atomic_dec_return(&local->active_users); + trace_rxrpc_local(debug_id, why, r, u); + if (u == 0) { + rxrpc_get_local(local, rxrpc_local_get_queue); rxrpc_queue_local(local); } } @@ -418,12 +426,11 @@ static void rxrpc_local_processor(struct work_struct *work) if (local->dead) return; - trace_rxrpc_local(local->debug_id, rxrpc_local_processing, - refcount_read(&local->ref), NULL); + rxrpc_see_local(local, rxrpc_local_processing); do { again = false; - if (!__rxrpc_use_local(local)) { + if (!__rxrpc_use_local(local, rxrpc_local_use_work)) { rxrpc_local_destroyer(local); break; } @@ -443,10 +450,10 @@ static void rxrpc_local_processor(struct work_struct *work) again = true; } - __rxrpc_unuse_local(local); + __rxrpc_unuse_local(local, rxrpc_local_unuse_work); } while (again); - rxrpc_put_local(local); + rxrpc_put_local(local, rxrpc_local_put_queue); } /* @@ -460,6 +467,7 @@ static void rxrpc_local_rcu(struct rcu_head *rcu) ASSERT(!work_pending(&local->processor)); + rxrpc_see_local(local, rxrpc_local_free); kfree(local); _leave(""); } diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index b5d8eac8c49c..2762b7ada9ae 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -288,8 +288,7 @@ void rxrpc_transmit_ack_packets(struct rxrpc_local *local) LIST_HEAD(queue); int ret; - trace_rxrpc_local(local->debug_id, rxrpc_local_tx_ack, - refcount_read(&local->ref), NULL); + rxrpc_see_local(local, rxrpc_local_see_tx_ack); if (list_empty(&local->ack_tx_queue)) return; diff --git a/net/rxrpc/peer_event.c b/net/rxrpc/peer_event.c index ad4d1769e02b..3f8d104ecaa7 100644 --- a/net/rxrpc/peer_event.c +++ b/net/rxrpc/peer_event.c @@ -266,7 +266,7 @@ static void rxrpc_peer_keepalive_dispatch(struct rxrpc_net *rxnet, if (!rxrpc_get_peer_maybe(peer)) continue; - if (__rxrpc_use_local(peer->local)) { + if (__rxrpc_use_local(peer->local, rxrpc_local_use_peer_keepalive)) { spin_unlock_bh(&rxnet->peer_hash_lock); 
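		/* The local endpoint is now marked as in use (traced as
		 * rxrpc_local_use_peer_keepalive), so work out when this peer's
		 * next keepalive transmission falls due.
		 */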
keepalive_at = peer->last_tx_at + RXRPC_KEEPALIVE_TIME; @@ -289,7 +289,7 @@ static void rxrpc_peer_keepalive_dispatch(struct rxrpc_net *rxnet, spin_lock_bh(&rxnet->peer_hash_lock); list_add_tail(&peer->keepalive_link, &rxnet->peer_keepalive[slot & mask]); - rxrpc_unuse_local(peer->local); + rxrpc_unuse_local(peer->local, rxrpc_local_unuse_peer_keepalive); } rxrpc_put_peer_locked(peer); } diff --git a/net/rxrpc/peer_object.c b/net/rxrpc/peer_object.c index b3c3c1c344fc..bcef897560e7 100644 --- a/net/rxrpc/peer_object.c +++ b/net/rxrpc/peer_object.c @@ -215,7 +215,7 @@ struct rxrpc_peer *rxrpc_alloc_peer(struct rxrpc_local *local, gfp_t gfp) peer = kzalloc(sizeof(struct rxrpc_peer), gfp); if (peer) { refcount_set(&peer->ref, 1); - peer->local = rxrpc_get_local(local); + peer->local = rxrpc_get_local(local, rxrpc_local_get_peer); INIT_HLIST_HEAD(&peer->error_targets); peer->service_conns = RB_ROOT; seqlock_init(&peer->service_conn_lock); @@ -294,7 +294,7 @@ static struct rxrpc_peer *rxrpc_create_peer(struct rxrpc_sock *rx, static void rxrpc_free_peer(struct rxrpc_peer *peer) { - rxrpc_put_local(peer->local); + rxrpc_put_local(peer->local, rxrpc_local_put_peer); kfree_rcu(peer, rcu); } From patchwork Wed Nov 30 16:55:44 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 27884 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp1038634wrr; Wed, 30 Nov 2022 09:00:03 -0800 (PST) X-Google-Smtp-Source: AA0mqf5SCcIEJE4pVAMSr9z0AIIJ77UKXh3WnfmLhkEWokWCDZBGMrX9MtWbClEHFjobKPsEtNKO X-Received: by 2002:a17:903:1cc:b0:185:5453:5e01 with SMTP id e12-20020a17090301cc00b0018554535e01mr41905951plh.113.1669827603195; Wed, 30 Nov 2022 09:00:03 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669827603; cv=none; d=google.com; s=arc-20160816; b=dwomvryoUE17bxMJJLcNl7iFKtM4Qb3eX6Hka0TbpzdBpsexfad5Uyvse44ewCaj4o M4hIAI8VYtDETrICL6Es2Nj+sj/60E6wiLKSlKqK1575c+Hss14woPtQjceRGQONGApA jKMxtQp4A0u00s6eOtIXKQPsUz12wLkzF9+PzbZlZAwAtqDjfmJYio5CRgLdhYkNyh0V J0rrbZ3+h39jOZZ47iagmG2TNRbP/zeXjLlr8XpEPT60jMKgreGAxLPcAbaGjzCJHrkI HSvNpQrPt1MOtNWysfybE0wk9bNhCyQdN86zBMOsgU1YaTVpFXKNEOF4EvWCguMiNflK bsXQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:cc:to:from :subject:organization:dkim-signature; bh=0n8Vbrv97z4SNIRzk58nGPmQUujEQPXlRC+fxQTL5XE=; b=fh5SFn/9zSY3n6l07Ph8YF0I49zLVXGYce63AJt7iNcYA+QAiKp600UONmSbMqEaIV Q9QqkMZ5dsTH3kOGaMuoHt7ZjVbXPzT6Yv26oPaDruAry0M1Wu/OzQp7DZn6iuISuDv5 b7dGlxXLJCVPmDHGSn6wpTMCaQu0b+bWnd6v7fG3awZmJEkYNM9LnXto+I/Riz74E373 w26LxVhNu0M3uXxM0mcNaeuc2fT+nYUf31zArQwPKjGOwBK2ZZtVZ7aJ/h9rywawGCGC vTi6gp4YDvvGpQEhUtl2GtjPmTzNEdbSioPKjw1bwW1fIN0qBMHBrHL/us14UUK1yI72 8qcQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=Ou2mVTiZ; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: from out1.vger.email (out1.vger.email. 
3798903 Subject: [PATCH net-next 10/35] rxrpc: trace: Don't use __builtin_return_address for rxrpc_peer tracing From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Wed, 30 Nov 2022 16:55:44 +0000 Message-ID: <166982734427.621383.161866075051804829.stgit@warthog.procyon.org.uk> In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> References: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750941148799322019?= X-GMAIL-MSGID: =?utf-8?q?1750941148799322019?= In rxrpc tracing, use enums to generate lists of points of interest rather than __builtin_return_address() for the rxrpc_peer tracepoint Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- include/trace/events/rxrpc.h | 43 +++++++++++++++++++++++++----------------- net/rxrpc/af_rxrpc.c | 2 +- net/rxrpc/ar-internal.h | 11 ++++++----- net/rxrpc/call_accept.c | 8 +++++--- net/rxrpc/call_object.c | 2 +- net/rxrpc/conn_client.c | 8 ++++---- net/rxrpc/conn_object.c | 2 +- net/rxrpc/peer_event.c | 8 ++++---- net/rxrpc/peer_object.c | 34 ++++++++++++++++----------------- net/rxrpc/sendmsg.c | 2 +- 10 files changed, 65 insertions(+), 55 deletions(-) diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index 015569845b1d..1c74143a51c1 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -63,10 +63,23 @@ E_(rxrpc_local_use_work, "USE work ") #define rxrpc_peer_traces \ - EM(rxrpc_peer_got, "GOT") \ - EM(rxrpc_peer_new, "NEW") \ - EM(rxrpc_peer_processing, "PRO") \ - E_(rxrpc_peer_put, "PUT") + EM(rxrpc_peer_free, "FREE ") \ + EM(rxrpc_peer_get_accept, "GET accept ") \ + EM(rxrpc_peer_get_activate_call, "GET act-call") \ + EM(rxrpc_peer_get_bundle, "GET bundle ") \ + EM(rxrpc_peer_get_client_conn, "GET cln-conn") \ + EM(rxrpc_peer_get_input_error, "GET inpt-err") \ + EM(rxrpc_peer_get_keepalive, "GET keepaliv") \ + EM(rxrpc_peer_get_lookup_client, "GET look-cln") \ + EM(rxrpc_peer_get_service_conn, "GET srv-conn") \ + EM(rxrpc_peer_new_client, "NEW client ") \ + EM(rxrpc_peer_new_prealloc, "NEW prealloc") \ + EM(rxrpc_peer_put_bundle, "PUT bundle ") \ + EM(rxrpc_peer_put_call, "PUT call ") \ + EM(rxrpc_peer_put_conn, "PUT conn ") \ + EM(rxrpc_peer_put_discard_tmp, "PUT disc-tmp") \ + EM(rxrpc_peer_put_input_error, "PUT inpt-err") \ + E_(rxrpc_peer_put_keepalive, "PUT keepaliv") #define rxrpc_conn_traces \ EM(rxrpc_conn_got, "GOT") \ @@ -394,30 +407,26 @@ TRACE_EVENT(rxrpc_local, ); TRACE_EVENT(rxrpc_peer, - TP_PROTO(unsigned int peer_debug_id, enum rxrpc_peer_trace op, - int usage, const void *where), + TP_PROTO(unsigned int peer_debug_id, int ref, enum rxrpc_peer_trace why), - TP_ARGS(peer_debug_id, op, usage, where), + TP_ARGS(peer_debug_id, ref, why), TP_STRUCT__entry( __field(unsigned int, peer ) - __field(int, op ) - __field(int, usage ) - 
__field(const void *, where ) + __field(int, ref ) + __field(int, why ) ), TP_fast_assign( __entry->peer = peer_debug_id; - __entry->op = op; - __entry->usage = usage; - __entry->where = where; + __entry->ref = ref; + __entry->why = why; ), - TP_printk("P=%08x %s u=%d sp=%pSR", + TP_printk("P=%08x %s r=%d", __entry->peer, - __print_symbolic(__entry->op, rxrpc_peer_traces), - __entry->usage, - __entry->where) + __print_symbolic(__entry->why, rxrpc_peer_traces), + __entry->ref) ); TRACE_EVENT(rxrpc_conn, diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c index 989ebca899f3..7a0dc01741e7 100644 --- a/net/rxrpc/af_rxrpc.c +++ b/net/rxrpc/af_rxrpc.c @@ -328,7 +328,7 @@ struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *sock, mutex_unlock(&call->user_mutex); } - rxrpc_put_peer(cp.peer); + rxrpc_put_peer(cp.peer, rxrpc_peer_put_discard_tmp); _leave(" = %p", call); return call; } diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index dde9ce21ef48..6cb111e9761c 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -1063,14 +1063,15 @@ struct rxrpc_peer *rxrpc_lookup_peer_rcu(struct rxrpc_local *, const struct sockaddr_rxrpc *); struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_sock *, struct rxrpc_local *, struct sockaddr_rxrpc *, gfp_t); -struct rxrpc_peer *rxrpc_alloc_peer(struct rxrpc_local *, gfp_t); +struct rxrpc_peer *rxrpc_alloc_peer(struct rxrpc_local *, gfp_t, + enum rxrpc_peer_trace); void rxrpc_new_incoming_peer(struct rxrpc_sock *, struct rxrpc_local *, struct rxrpc_peer *); void rxrpc_destroy_all_peers(struct rxrpc_net *); -struct rxrpc_peer *rxrpc_get_peer(struct rxrpc_peer *); -struct rxrpc_peer *rxrpc_get_peer_maybe(struct rxrpc_peer *); -void rxrpc_put_peer(struct rxrpc_peer *); -void rxrpc_put_peer_locked(struct rxrpc_peer *); +struct rxrpc_peer *rxrpc_get_peer(struct rxrpc_peer *, enum rxrpc_peer_trace); +struct rxrpc_peer *rxrpc_get_peer_maybe(struct rxrpc_peer *, enum rxrpc_peer_trace); +void rxrpc_put_peer(struct rxrpc_peer *, enum rxrpc_peer_trace); +void rxrpc_put_peer_locked(struct rxrpc_peer *, enum rxrpc_peer_trace); /* * proc.c diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c index 1b12d4e28373..f6bc3b07c3e5 100644 --- a/net/rxrpc/call_accept.c +++ b/net/rxrpc/call_accept.c @@ -70,7 +70,9 @@ static int rxrpc_service_prealloc_one(struct rxrpc_sock *rx, head = b->peer_backlog_head; tail = READ_ONCE(b->peer_backlog_tail); if (CIRC_CNT(head, tail, size) < max) { - struct rxrpc_peer *peer = rxrpc_alloc_peer(rx->local, gfp); + struct rxrpc_peer *peer; + + peer = rxrpc_alloc_peer(rx->local, gfp, rxrpc_peer_new_prealloc); if (!peer) return -ENOMEM; b->peer_backlog[head] = peer; @@ -286,7 +288,7 @@ static struct rxrpc_call *rxrpc_alloc_incoming_call(struct rxrpc_sock *rx, return NULL; if (!conn) { - if (peer && !rxrpc_get_peer_maybe(peer)) + if (peer && !rxrpc_get_peer_maybe(peer, rxrpc_peer_get_service_conn)) peer = NULL; if (!peer) { peer = b->peer_backlog[peer_tail]; @@ -323,7 +325,7 @@ static struct rxrpc_call *rxrpc_alloc_incoming_call(struct rxrpc_sock *rx, call->conn = conn; call->security = conn->security; call->security_ix = conn->security_ix; - call->peer = rxrpc_get_peer(conn->peer); + call->peer = rxrpc_get_peer(conn->peer, rxrpc_peer_get_accept); call->cong_ssthresh = call->peer->cong_ssthresh; call->tx_last_sent = ktime_get_real(); return call; diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c index 59928f0a8fe1..1b725afd6e2c 100644 --- a/net/rxrpc/call_object.c +++ b/net/rxrpc/call_object.c @@ 
-636,7 +636,7 @@ static void rxrpc_destroy_call(struct work_struct *work) rxrpc_delete_call_timer(call); rxrpc_put_connection(call->conn); - rxrpc_put_peer(call->peer); + rxrpc_put_peer(call->peer, rxrpc_peer_put_call); kmem_cache_free(rxrpc_call_jar, call); if (atomic_dec_and_test(&rxnet->nr_calls)) wake_up_var(&rxnet->nr_calls); diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c index 9a69b4c1b182..9444da235a48 100644 --- a/net/rxrpc/conn_client.c +++ b/net/rxrpc/conn_client.c @@ -123,7 +123,7 @@ static struct rxrpc_bundle *rxrpc_alloc_bundle(struct rxrpc_conn_parameters *cp, bundle = kzalloc(sizeof(*bundle), gfp); if (bundle) { bundle->local = cp->local; - bundle->peer = rxrpc_get_peer(cp->peer); + bundle->peer = rxrpc_get_peer(cp->peer, rxrpc_peer_get_bundle); bundle->key = cp->key; bundle->exclusive = cp->exclusive; bundle->upgrade = cp->upgrade; @@ -145,7 +145,7 @@ struct rxrpc_bundle *rxrpc_get_bundle(struct rxrpc_bundle *bundle) static void rxrpc_free_bundle(struct rxrpc_bundle *bundle) { - rxrpc_put_peer(bundle->peer); + rxrpc_put_peer(bundle->peer, rxrpc_peer_put_bundle); kfree(bundle); } @@ -207,7 +207,7 @@ rxrpc_alloc_client_connection(struct rxrpc_bundle *bundle, gfp_t gfp) write_unlock(&rxnet->conn_lock); rxrpc_get_bundle(bundle); - rxrpc_get_peer(conn->peer); + rxrpc_get_peer(conn->peer, rxrpc_peer_get_client_conn); rxrpc_get_local(conn->local, rxrpc_local_get_client_conn); key_get(conn->key); @@ -543,7 +543,7 @@ static void rxrpc_activate_one_channel(struct rxrpc_connection *conn, rxrpc_see_call(call); list_del_init(&call->chan_wait_link); - call->peer = rxrpc_get_peer(conn->peer); + call->peer = rxrpc_get_peer(conn->peer, rxrpc_peer_get_activate_call); call->conn = rxrpc_get_connection(conn); call->cid = conn->proto.cid | channel; call->call_id = call_id; diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c index 725359afeac0..554ee5dd3325 100644 --- a/net/rxrpc/conn_object.c +++ b/net/rxrpc/conn_object.c @@ -362,7 +362,7 @@ static void rxrpc_destroy_connection(struct rcu_head *rcu) conn->security->clear(conn); key_put(conn->key); rxrpc_put_bundle(conn->bundle); - rxrpc_put_peer(conn->peer); + rxrpc_put_peer(conn->peer, rxrpc_peer_put_conn); if (atomic_dec_and_test(&conn->local->rxnet->nr_conns)) wake_up_var(&conn->local->rxnet->nr_conns); diff --git a/net/rxrpc/peer_event.c b/net/rxrpc/peer_event.c index 3f8d104ecaa7..5e97d321ac38 100644 --- a/net/rxrpc/peer_event.c +++ b/net/rxrpc/peer_event.c @@ -168,7 +168,7 @@ void rxrpc_error_report(struct sock *sk) } peer = rxrpc_lookup_peer_local_rcu(local, skb, &srx); - if (peer && !rxrpc_get_peer_maybe(peer)) + if (peer && !rxrpc_get_peer_maybe(peer, rxrpc_peer_get_input_error)) peer = NULL; if (!peer) { rcu_read_unlock(); @@ -190,7 +190,7 @@ void rxrpc_error_report(struct sock *sk) out: rcu_read_unlock(); rxrpc_free_skb(skb, rxrpc_skb_freed); - rxrpc_put_peer(peer); + rxrpc_put_peer(peer, rxrpc_peer_put_input_error); _leave(""); } @@ -263,7 +263,7 @@ static void rxrpc_peer_keepalive_dispatch(struct rxrpc_net *rxnet, struct rxrpc_peer, keepalive_link); list_del_init(&peer->keepalive_link); - if (!rxrpc_get_peer_maybe(peer)) + if (!rxrpc_get_peer_maybe(peer, rxrpc_peer_get_keepalive)) continue; if (__rxrpc_use_local(peer->local, rxrpc_local_use_peer_keepalive)) { @@ -291,7 +291,7 @@ static void rxrpc_peer_keepalive_dispatch(struct rxrpc_net *rxnet, &rxnet->peer_keepalive[slot & mask]); rxrpc_unuse_local(peer->local, rxrpc_local_unuse_peer_keepalive); } - rxrpc_put_peer_locked(peer); + 
rxrpc_put_peer_locked(peer, rxrpc_peer_put_keepalive); } spin_unlock_bh(&rxnet->peer_hash_lock); diff --git a/net/rxrpc/peer_object.c b/net/rxrpc/peer_object.c index bcef897560e7..9e682a60a800 100644 --- a/net/rxrpc/peer_object.c +++ b/net/rxrpc/peer_object.c @@ -205,9 +205,9 @@ static void rxrpc_assess_MTU_size(struct rxrpc_sock *rx, /* * Allocate a peer. */ -struct rxrpc_peer *rxrpc_alloc_peer(struct rxrpc_local *local, gfp_t gfp) +struct rxrpc_peer *rxrpc_alloc_peer(struct rxrpc_local *local, gfp_t gfp, + enum rxrpc_peer_trace why) { - const void *here = __builtin_return_address(0); struct rxrpc_peer *peer; _enter(""); @@ -226,7 +226,7 @@ struct rxrpc_peer *rxrpc_alloc_peer(struct rxrpc_local *local, gfp_t gfp) rxrpc_peer_init_rtt(peer); peer->cong_ssthresh = RXRPC_TX_MAX_WINDOW; - trace_rxrpc_peer(peer->debug_id, rxrpc_peer_new, 1, here); + trace_rxrpc_peer(peer->debug_id, why, 1); } _leave(" = %p", peer); @@ -282,7 +282,7 @@ static struct rxrpc_peer *rxrpc_create_peer(struct rxrpc_sock *rx, _enter(""); - peer = rxrpc_alloc_peer(local, gfp); + peer = rxrpc_alloc_peer(local, gfp, rxrpc_peer_new_client); if (peer) { memcpy(&peer->srx, srx, sizeof(*srx)); rxrpc_init_peer(rx, peer, hash_key); @@ -294,6 +294,7 @@ static struct rxrpc_peer *rxrpc_create_peer(struct rxrpc_sock *rx, static void rxrpc_free_peer(struct rxrpc_peer *peer) { + trace_rxrpc_peer(peer->debug_id, 0, rxrpc_peer_free); rxrpc_put_local(peer->local, rxrpc_local_put_peer); kfree_rcu(peer, rcu); } @@ -334,7 +335,7 @@ struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_sock *rx, /* search the peer list first */ rcu_read_lock(); peer = __rxrpc_lookup_peer_rcu(local, srx, hash_key); - if (peer && !rxrpc_get_peer_maybe(peer)) + if (peer && !rxrpc_get_peer_maybe(peer, rxrpc_peer_get_lookup_client)) peer = NULL; rcu_read_unlock(); @@ -352,7 +353,7 @@ struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_sock *rx, /* Need to check that we aren't racing with someone else */ peer = __rxrpc_lookup_peer_rcu(local, srx, hash_key); - if (peer && !rxrpc_get_peer_maybe(peer)) + if (peer && !rxrpc_get_peer_maybe(peer, rxrpc_peer_get_lookup_client)) peer = NULL; if (!peer) { hash_add_rcu(rxnet->peer_hash, @@ -376,27 +377,26 @@ struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_sock *rx, /* * Get a ref on a peer record. */ -struct rxrpc_peer *rxrpc_get_peer(struct rxrpc_peer *peer) +struct rxrpc_peer *rxrpc_get_peer(struct rxrpc_peer *peer, enum rxrpc_peer_trace why) { - const void *here = __builtin_return_address(0); int r; __refcount_inc(&peer->ref, &r); - trace_rxrpc_peer(peer->debug_id, rxrpc_peer_got, r + 1, here); + trace_rxrpc_peer(peer->debug_id, why, r + 1); return peer; } /* * Get a ref on a peer record unless its usage has already reached 0. */ -struct rxrpc_peer *rxrpc_get_peer_maybe(struct rxrpc_peer *peer) +struct rxrpc_peer *rxrpc_get_peer_maybe(struct rxrpc_peer *peer, + enum rxrpc_peer_trace why) { - const void *here = __builtin_return_address(0); int r; if (peer) { if (__refcount_inc_not_zero(&peer->ref, &r)) - trace_rxrpc_peer(peer->debug_id, rxrpc_peer_got, r + 1, here); + trace_rxrpc_peer(peer->debug_id, r + 1, why); else peer = NULL; } @@ -423,9 +423,8 @@ static void __rxrpc_put_peer(struct rxrpc_peer *peer) /* * Drop a ref on a peer record. 
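 * As with the rxrpc_local tracepoint, the "why" argument (e.g.
 * rxrpc_peer_put_call or rxrpc_peer_put_keepalive) identifies the call site
 * so that trace_rxrpc_peer() can print it with __print_symbolic().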
*/ -void rxrpc_put_peer(struct rxrpc_peer *peer) +void rxrpc_put_peer(struct rxrpc_peer *peer, enum rxrpc_peer_trace why) { - const void *here = __builtin_return_address(0); unsigned int debug_id; bool dead; int r; @@ -433,7 +432,7 @@ void rxrpc_put_peer(struct rxrpc_peer *peer) if (peer) { debug_id = peer->debug_id; dead = __refcount_dec_and_test(&peer->ref, &r); - trace_rxrpc_peer(debug_id, rxrpc_peer_put, r - 1, here); + trace_rxrpc_peer(debug_id, r - 1, why); if (dead) __rxrpc_put_peer(peer); } @@ -443,15 +442,14 @@ void rxrpc_put_peer(struct rxrpc_peer *peer) * Drop a ref on a peer record where the caller already holds the * peer_hash_lock. */ -void rxrpc_put_peer_locked(struct rxrpc_peer *peer) +void rxrpc_put_peer_locked(struct rxrpc_peer *peer, enum rxrpc_peer_trace why) { - const void *here = __builtin_return_address(0); unsigned int debug_id = peer->debug_id; bool dead; int r; dead = __refcount_dec_and_test(&peer->ref, &r); - trace_rxrpc_peer(debug_id, rxrpc_peer_put, r - 1, here); + trace_rxrpc_peer(debug_id, r - 1, why); if (dead) { hash_del_rcu(&peer->hash_link); list_del_init(&peer->keepalive_link); diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c index e5fd8a95bf71..cfe0badba0b3 100644 --- a/net/rxrpc/sendmsg.c +++ b/net/rxrpc/sendmsg.c @@ -604,7 +604,7 @@ rxrpc_new_client_call_for_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, atomic_inc_return(&rxrpc_debug_id)); /* The socket is now unlocked */ - rxrpc_put_peer(cp.peer); + rxrpc_put_peer(cp.peer, rxrpc_peer_put_discard_tmp); _leave(" = %p\n", call); return call; } From patchwork Wed Nov 30 16:55:53 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 27885 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp1038730wrr; Wed, 30 Nov 2022 09:00:12 -0800 (PST) X-Google-Smtp-Source: AA0mqf6hM1IQHMCfew2svUekV8z7vcQm45dWGYHWkjSDDlsn84cnWRgr5lE4X9OlbDKgiRwcd71E X-Received: by 2002:a17:90b:1102:b0:212:d76f:b9e6 with SMTP id gi2-20020a17090b110200b00212d76fb9e6mr69954741pjb.224.1669827612140; Wed, 30 Nov 2022 09:00:12 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669827612; cv=none; d=google.com; s=arc-20160816; b=lQX77HtRri8BBzhAMd07NazWCBMfHcviTksgUUkKxFI2zuV4pzQtdY1daVDuilKPFJ V7DQWyuA5QjFkBWCRbYWB93oZXsoHYdQf5zCrmPbZM3Biqc0AaCfyCaZu04FJ0A375AV n3fZvAoK5rxBozkQqWX2zVYipDr9SIjDQT9jpKuAjLkQn5udAs3g4tPj44ufBq2cCS3i y4ub/2zqFYY0R2w+24RK+GZXKGrkNHd3ZRx3hwGNlxwbjzUMbHebNoBLafZxKBvMG1sa ftSBncIJEq9Rt2saG6kvV8k0HLrzRvgQrkD/az53La2YsaQTsMjwBxKVHFBGsu5vyzeH nj0w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:cc:to:from :subject:organization:dkim-signature; bh=XilLwybICsq41/oo+GVJRMcE1JWK/wGEmIz1NglHaXQ=; b=DguE4qK7snYK8O3/OGM7tXBcKGsU3Y/4iMyf3CXMxTP1eC7o7q+SOoAYyob3DzHxSe tZJdDBPQpjss5gVqvgFhK+SqI4eTsQJCIdlHSJnFqwiceURcvad6cyK8a8u9+VaKbV5s su48QD/wdhfvGd5QfvksXtRAnGfX3JploIxf3npSrDWgW7m7RoOP06rwfVLYv3bOc9Wc wAvNVbYHSZJs9D6DeTMLNhI2K2snVFTo81sGt16GujiZfyt+iPxt0KgHd6kZKQR+Zva4 f4k5P7s21suPjMRvuOuWfyRUWLygpE27j8OPup0Rf2njru5jBOqjktRDqCYdNTLG/lyX YDug== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=aprQGN44; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) 
3798903 Subject: [PATCH net-next 11/35] rxrpc: trace: Don't use __builtin_return_address for rxrpc_conn tracing From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Wed, 30 Nov 2022 16:55:53 +0000 Message-ID: <166982735304.621383.2105219559780451282.stgit@warthog.procyon.org.uk> In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> References: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.6 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750941158246086894?= X-GMAIL-MSGID: =?utf-8?q?1750941158246086894?= In rxrpc tracing, use enums to generate lists of points of interest rather than __builtin_return_address() for the rxrpc_conn tracepoint Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- include/trace/events/rxrpc.h | 58 +++++++++++++++++++++++++++--------------- net/rxrpc/ar-internal.h | 21 +++++++++------ net/rxrpc/call_accept.c | 9 ++----- net/rxrpc/call_object.c | 2 + net/rxrpc/conn_client.c | 28 ++++++++++---------- net/rxrpc/conn_event.c | 4 +-- net/rxrpc/conn_object.c | 40 +++++++++++++++-------------- net/rxrpc/conn_service.c | 4 +-- net/rxrpc/input.c | 2 + 9 files changed, 92 insertions(+), 76 deletions(-) diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index 1c74143a51c1..e09568a8c173 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -82,14 +82,34 @@ E_(rxrpc_peer_put_keepalive, "PUT keepaliv") #define rxrpc_conn_traces \ - EM(rxrpc_conn_got, "GOT") \ - EM(rxrpc_conn_new_client, "NWc") \ - EM(rxrpc_conn_new_service, "NWs") \ - EM(rxrpc_conn_put_client, "PTc") \ - EM(rxrpc_conn_put_service, "PTs") \ - EM(rxrpc_conn_queued, "QUE") \ - EM(rxrpc_conn_reap_service, "RPs") \ - E_(rxrpc_conn_seen, "SEE") + EM(rxrpc_conn_free, "FREE ") \ + EM(rxrpc_conn_get_activate_call, "GET act-call") \ + EM(rxrpc_conn_get_call_input, "GET inp-call") \ + EM(rxrpc_conn_get_conn_input, "GET inp-conn") \ + EM(rxrpc_conn_get_idle, "GET idle ") \ + EM(rxrpc_conn_get_poke, "GET poke ") \ + EM(rxrpc_conn_get_service_conn, "GET svc-conn") \ + EM(rxrpc_conn_new_client, "NEW client ") \ + EM(rxrpc_conn_new_service, "NEW service ") \ + EM(rxrpc_conn_put_already_queued, "PUT alreadyq") \ + EM(rxrpc_conn_put_call, "PUT call ") \ + EM(rxrpc_conn_put_call_input, "PUT inp-call") \ + EM(rxrpc_conn_put_conn_input, "PUT inp-conn") \ + EM(rxrpc_conn_put_discard, "PUT discard ") \ + EM(rxrpc_conn_put_discard_idle, "PUT disc-idl") \ + EM(rxrpc_conn_put_local_dead, "PUT loc-dead") \ + EM(rxrpc_conn_put_noreuse, "PUT noreuse ") \ + EM(rxrpc_conn_put_poke, "PUT poke ") \ + EM(rxrpc_conn_put_unbundle, "PUT unbundle") \ + EM(rxrpc_conn_put_unidle, "PUT unidle ") \ + EM(rxrpc_conn_put_work, "PUT work ") \ + EM(rxrpc_conn_queue_challenge, "GQ chall ") \ + EM(rxrpc_conn_queue_retry_work, "GQ retry-wk") \ + EM(rxrpc_conn_queue_rx_work, "GQ rx-work ") \ + 
EM(rxrpc_conn_queue_timer, "GQ timer ") \ + EM(rxrpc_conn_see_new_service_conn, "SEE new-svc ") \ + EM(rxrpc_conn_see_reap_service, "SEE reap-svc") \ + E_(rxrpc_conn_see_work, "SEE work ") #define rxrpc_client_traces \ EM(rxrpc_client_activate_chans, "Activa") \ @@ -430,30 +450,26 @@ TRACE_EVENT(rxrpc_peer, ); TRACE_EVENT(rxrpc_conn, - TP_PROTO(unsigned int conn_debug_id, enum rxrpc_conn_trace op, - int usage, const void *where), + TP_PROTO(unsigned int conn_debug_id, int ref, enum rxrpc_conn_trace why), - TP_ARGS(conn_debug_id, op, usage, where), + TP_ARGS(conn_debug_id, ref, why), TP_STRUCT__entry( __field(unsigned int, conn ) - __field(int, op ) - __field(int, usage ) - __field(const void *, where ) + __field(int, ref ) + __field(int, why ) ), TP_fast_assign( __entry->conn = conn_debug_id; - __entry->op = op; - __entry->usage = usage; - __entry->where = where; + __entry->ref = ref; + __entry->why = why; ), - TP_printk("C=%08x %s u=%d sp=%pSR", + TP_printk("C=%08x %s r=%d", __entry->conn, - __print_symbolic(__entry->op, rxrpc_conn_traces), - __entry->usage, - __entry->where) + __print_symbolic(__entry->why, rxrpc_conn_traces), + __entry->ref) ); TRACE_EVENT(rxrpc_client, diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 6cb111e9761c..bc8281c410c5 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -882,7 +882,7 @@ int rxrpc_connect_call(struct rxrpc_sock *, struct rxrpc_call *, gfp_t); void rxrpc_expose_client_call(struct rxrpc_call *); void rxrpc_disconnect_client_call(struct rxrpc_bundle *, struct rxrpc_call *); -void rxrpc_put_client_conn(struct rxrpc_connection *); +void rxrpc_put_client_conn(struct rxrpc_connection *, enum rxrpc_conn_trace); void rxrpc_discard_expired_client_conns(struct work_struct *); void rxrpc_destroy_all_client_connections(struct rxrpc_net *); void rxrpc_clean_up_local_conns(struct rxrpc_local *); @@ -906,11 +906,13 @@ struct rxrpc_connection *rxrpc_find_connection_rcu(struct rxrpc_local *, void __rxrpc_disconnect_call(struct rxrpc_connection *, struct rxrpc_call *); void rxrpc_disconnect_call(struct rxrpc_call *); void rxrpc_kill_connection(struct rxrpc_connection *); -bool rxrpc_queue_conn(struct rxrpc_connection *); -void rxrpc_see_connection(struct rxrpc_connection *); -struct rxrpc_connection *rxrpc_get_connection(struct rxrpc_connection *); -struct rxrpc_connection *rxrpc_get_connection_maybe(struct rxrpc_connection *); -void rxrpc_put_service_conn(struct rxrpc_connection *); +bool rxrpc_queue_conn(struct rxrpc_connection *, enum rxrpc_conn_trace); +void rxrpc_see_connection(struct rxrpc_connection *, enum rxrpc_conn_trace); +struct rxrpc_connection *rxrpc_get_connection(struct rxrpc_connection *, + enum rxrpc_conn_trace); +struct rxrpc_connection *rxrpc_get_connection_maybe(struct rxrpc_connection *, + enum rxrpc_conn_trace); +void rxrpc_put_service_conn(struct rxrpc_connection *, enum rxrpc_conn_trace); void rxrpc_service_connection_reaper(struct work_struct *); void rxrpc_destroy_all_connections(struct rxrpc_net *); @@ -924,15 +926,16 @@ static inline bool rxrpc_conn_is_service(const struct rxrpc_connection *conn) return !rxrpc_conn_is_client(conn); } -static inline void rxrpc_put_connection(struct rxrpc_connection *conn) +static inline void rxrpc_put_connection(struct rxrpc_connection *conn, + enum rxrpc_conn_trace why) { if (!conn) return; if (rxrpc_conn_is_client(conn)) - rxrpc_put_client_conn(conn); + rxrpc_put_client_conn(conn, why); else - rxrpc_put_service_conn(conn); + rxrpc_put_service_conn(conn, why); } 
static inline void rxrpc_reduce_conn_timer(struct rxrpc_connection *conn, diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c index f6bc3b07c3e5..04b52e28e0cc 100644 --- a/net/rxrpc/call_accept.c +++ b/net/rxrpc/call_accept.c @@ -91,9 +91,6 @@ static int rxrpc_service_prealloc_one(struct rxrpc_sock *rx, b->conn_backlog[head] = conn; smp_store_release(&b->conn_backlog_head, (head + 1) & (size - 1)); - - trace_rxrpc_conn(conn->debug_id, rxrpc_conn_new_service, - refcount_read(&conn->ref), here); } /* Now it gets complicated, because calls get registered with the @@ -309,10 +306,10 @@ static struct rxrpc_call *rxrpc_alloc_incoming_call(struct rxrpc_sock *rx, (conn_tail + 1) & (RXRPC_BACKLOG_MAX - 1)); conn->local = rxrpc_get_local(local, rxrpc_local_get_prealloc_conn); conn->peer = peer; - rxrpc_see_connection(conn); + rxrpc_see_connection(conn, rxrpc_conn_see_new_service_conn); rxrpc_new_incoming_connection(rx, conn, sec, skb); } else { - rxrpc_get_connection(conn); + rxrpc_get_connection(conn, rxrpc_conn_get_service_conn); } /* And now we can allocate and set up a new call */ @@ -402,7 +399,7 @@ struct rxrpc_call *rxrpc_new_incoming_call(struct rxrpc_local *local, case RXRPC_CONN_SERVICE_UNSECURED: conn->state = RXRPC_CONN_SERVICE_CHALLENGING; set_bit(RXRPC_CONN_EV_CHALLENGE, &call->conn->events); - rxrpc_queue_conn(call->conn); + rxrpc_queue_conn(call->conn, rxrpc_conn_queue_challenge); break; case RXRPC_CONN_SERVICE: diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c index 1b725afd6e2c..29ec4013aa0b 100644 --- a/net/rxrpc/call_object.c +++ b/net/rxrpc/call_object.c @@ -635,7 +635,7 @@ static void rxrpc_destroy_call(struct work_struct *work) rxrpc_delete_call_timer(call); - rxrpc_put_connection(call->conn); + rxrpc_put_connection(call->conn, rxrpc_conn_put_call); rxrpc_put_peer(call->peer, rxrpc_peer_put_call); kmem_cache_free(rxrpc_call_jar, call); if (atomic_dec_and_test(&rxnet->nr_calls)) diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c index 9444da235a48..dcfec6a45255 100644 --- a/net/rxrpc/conn_client.c +++ b/net/rxrpc/conn_client.c @@ -211,9 +211,8 @@ rxrpc_alloc_client_connection(struct rxrpc_bundle *bundle, gfp_t gfp) rxrpc_get_local(conn->local, rxrpc_local_get_client_conn); key_get(conn->key); - trace_rxrpc_conn(conn->debug_id, rxrpc_conn_new_client, - refcount_read(&conn->ref), - __builtin_return_address(0)); + trace_rxrpc_conn(conn->debug_id, refcount_read(&conn->ref), + rxrpc_conn_new_client); atomic_inc(&rxnet->nr_client_conns); trace_rxrpc_client(conn, -1, rxrpc_client_alloc); @@ -467,10 +466,10 @@ static void rxrpc_add_conn_to_bundle(struct rxrpc_bundle *bundle, gfp_t gfp) if (candidate) { _debug("discard C=%x", candidate->debug_id); trace_rxrpc_client(candidate, -1, rxrpc_client_duplicate); - rxrpc_put_connection(candidate); + rxrpc_put_connection(candidate, rxrpc_conn_put_discard); } - rxrpc_put_connection(old); + rxrpc_put_connection(old, rxrpc_conn_put_noreuse); _leave(""); } @@ -544,7 +543,7 @@ static void rxrpc_activate_one_channel(struct rxrpc_connection *conn, rxrpc_see_call(call); list_del_init(&call->chan_wait_link); call->peer = rxrpc_get_peer(conn->peer, rxrpc_peer_get_activate_call); - call->conn = rxrpc_get_connection(conn); + call->conn = rxrpc_get_connection(conn, rxrpc_conn_get_activate_call); call->cid = conn->proto.cid | channel; call->call_id = call_id; call->security = conn->security; @@ -592,7 +591,7 @@ static void rxrpc_unidle_conn(struct rxrpc_bundle *bundle, struct rxrpc_connecti } 
spin_unlock(&rxnet->client_conn_cache_lock); if (drop_ref) - rxrpc_put_connection(conn); + rxrpc_put_connection(conn, rxrpc_conn_put_unidle); } } @@ -896,7 +895,7 @@ void rxrpc_disconnect_client_call(struct rxrpc_bundle *bundle, struct rxrpc_call trace_rxrpc_client(conn, channel, rxrpc_client_to_idle); conn->idle_timestamp = jiffies; - rxrpc_get_connection(conn); + rxrpc_get_connection(conn, rxrpc_conn_get_idle); spin_lock(&rxnet->client_conn_cache_lock); list_move_tail(&conn->cache_link, &rxnet->idle_client_conns); spin_unlock(&rxnet->client_conn_cache_lock); @@ -938,7 +937,7 @@ static void rxrpc_unbundle_conn(struct rxrpc_connection *conn) if (need_drop) { rxrpc_deactivate_bundle(bundle); - rxrpc_put_connection(conn); + rxrpc_put_connection(conn, rxrpc_conn_put_unbundle); } } @@ -983,15 +982,15 @@ static void rxrpc_kill_client_conn(struct rxrpc_connection *conn) /* * Clean up a dead client connections. */ -void rxrpc_put_client_conn(struct rxrpc_connection *conn) +void rxrpc_put_client_conn(struct rxrpc_connection *conn, + enum rxrpc_conn_trace why) { - const void *here = __builtin_return_address(0); unsigned int debug_id = conn->debug_id; bool dead; int r; dead = __refcount_dec_and_test(&conn->ref, &r); - trace_rxrpc_conn(debug_id, rxrpc_conn_put_client, r - 1, here); + trace_rxrpc_conn(debug_id, r - 1, why); if (dead) rxrpc_kill_client_conn(conn); } @@ -1063,7 +1062,8 @@ void rxrpc_discard_expired_client_conns(struct work_struct *work) spin_unlock(&rxnet->client_conn_cache_lock); rxrpc_unbundle_conn(conn); - rxrpc_put_connection(conn); /* Drop the ->cache_link ref */ + /* Drop the ->cache_link ref */ + rxrpc_put_connection(conn, rxrpc_conn_put_discard_idle); nr_conns--; goto next; @@ -1134,7 +1134,7 @@ void rxrpc_clean_up_local_conns(struct rxrpc_local *local) struct rxrpc_connection, cache_link); list_del_init(&conn->cache_link); rxrpc_unbundle_conn(conn); - rxrpc_put_connection(conn); + rxrpc_put_connection(conn, rxrpc_conn_put_local_dead); } _leave(" [culled]"); diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c index 225edaf019f1..817f895c77ca 100644 --- a/net/rxrpc/conn_event.c +++ b/net/rxrpc/conn_event.c @@ -472,14 +472,14 @@ void rxrpc_process_connection(struct work_struct *work) struct rxrpc_connection *conn = container_of(work, struct rxrpc_connection, processor); - rxrpc_see_connection(conn); + rxrpc_see_connection(conn, rxrpc_conn_see_work); if (__rxrpc_use_local(conn->local, rxrpc_local_use_conn_work)) { rxrpc_do_process_connection(conn); rxrpc_unuse_local(conn->local, rxrpc_local_unuse_conn_work); } - rxrpc_put_connection(conn); + rxrpc_put_connection(conn, rxrpc_conn_put_work); _leave(""); return; } diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c index 554ee5dd3325..bbace8d9953d 100644 --- a/net/rxrpc/conn_object.c +++ b/net/rxrpc/conn_object.c @@ -26,7 +26,7 @@ static void rxrpc_connection_timer(struct timer_list *timer) struct rxrpc_connection *conn = container_of(timer, struct rxrpc_connection, timer); - rxrpc_queue_conn(conn); + rxrpc_queue_conn(conn, rxrpc_conn_queue_timer); } /* @@ -260,43 +260,42 @@ void rxrpc_kill_connection(struct rxrpc_connection *conn) * Queue a connection's work processor, getting a ref to pass to the work * queue. 
*/ -bool rxrpc_queue_conn(struct rxrpc_connection *conn) +bool rxrpc_queue_conn(struct rxrpc_connection *conn, enum rxrpc_conn_trace why) { - const void *here = __builtin_return_address(0); int r; if (!__refcount_inc_not_zero(&conn->ref, &r)) return false; if (rxrpc_queue_work(&conn->processor)) - trace_rxrpc_conn(conn->debug_id, rxrpc_conn_queued, r + 1, here); + trace_rxrpc_conn(conn->debug_id, why, r + 1); else - rxrpc_put_connection(conn); + rxrpc_put_connection(conn, rxrpc_conn_put_already_queued); return true; } /* * Note the re-emergence of a connection. */ -void rxrpc_see_connection(struct rxrpc_connection *conn) +void rxrpc_see_connection(struct rxrpc_connection *conn, + enum rxrpc_conn_trace why) { - const void *here = __builtin_return_address(0); if (conn) { - int n = refcount_read(&conn->ref); + int r = refcount_read(&conn->ref); - trace_rxrpc_conn(conn->debug_id, rxrpc_conn_seen, n, here); + trace_rxrpc_conn(conn->debug_id, r, why); } } /* * Get a ref on a connection. */ -struct rxrpc_connection *rxrpc_get_connection(struct rxrpc_connection *conn) +struct rxrpc_connection *rxrpc_get_connection(struct rxrpc_connection *conn, + enum rxrpc_conn_trace why) { - const void *here = __builtin_return_address(0); int r; __refcount_inc(&conn->ref, &r); - trace_rxrpc_conn(conn->debug_id, rxrpc_conn_got, r, here); + trace_rxrpc_conn(conn->debug_id, r + 1, why); return conn; } @@ -304,14 +303,14 @@ struct rxrpc_connection *rxrpc_get_connection(struct rxrpc_connection *conn) * Try to get a ref on a connection. */ struct rxrpc_connection * -rxrpc_get_connection_maybe(struct rxrpc_connection *conn) +rxrpc_get_connection_maybe(struct rxrpc_connection *conn, + enum rxrpc_conn_trace why) { - const void *here = __builtin_return_address(0); int r; if (conn) { if (__refcount_inc_not_zero(&conn->ref, &r)) - trace_rxrpc_conn(conn->debug_id, rxrpc_conn_got, r + 1, here); + trace_rxrpc_conn(conn->debug_id, r + 1, why); else conn = NULL; } @@ -331,14 +330,14 @@ static void rxrpc_set_service_reap_timer(struct rxrpc_net *rxnet, /* * Release a service connection */ -void rxrpc_put_service_conn(struct rxrpc_connection *conn) +void rxrpc_put_service_conn(struct rxrpc_connection *conn, + enum rxrpc_conn_trace why) { - const void *here = __builtin_return_address(0); unsigned int debug_id = conn->debug_id; int r; __refcount_dec(&conn->ref, &r); - trace_rxrpc_conn(debug_id, rxrpc_conn_put_service, r - 1, here); + trace_rxrpc_conn(debug_id, r - 1, why); if (r - 1 == 1) rxrpc_set_service_reap_timer(conn->local->rxnet, jiffies + rxrpc_connection_expiry); @@ -354,6 +353,9 @@ static void rxrpc_destroy_connection(struct rcu_head *rcu) _enter("{%d,u=%d}", conn->debug_id, refcount_read(&conn->ref)); + trace_rxrpc_conn(conn->debug_id, refcount_read(&conn->ref), + rxrpc_conn_free); + ASSERTCMP(refcount_read(&conn->ref), ==, 0); del_timer_sync(&conn->timer); @@ -419,7 +421,7 @@ void rxrpc_service_connection_reaper(struct work_struct *work) */ if (!refcount_dec_if_one(&conn->ref)) continue; - trace_rxrpc_conn(conn->debug_id, rxrpc_conn_reap_service, 0, NULL); + rxrpc_see_connection(conn, rxrpc_conn_see_reap_service); if (rxrpc_conn_is_client(conn)) BUG(); diff --git a/net/rxrpc/conn_service.c b/net/rxrpc/conn_service.c index a3b91864ef21..bf087213bd4d 100644 --- a/net/rxrpc/conn_service.c +++ b/net/rxrpc/conn_service.c @@ -141,9 +141,7 @@ struct rxrpc_connection *rxrpc_prealloc_service_connection(struct rxrpc_net *rxn list_add_tail(&conn->proc_link, &rxnet->conn_proc_list); write_unlock(&rxnet->conn_lock); - 
trace_rxrpc_conn(conn->debug_id, rxrpc_conn_new_service, - refcount_read(&conn->ref), - __builtin_return_address(0)); + rxrpc_see_connection(conn, rxrpc_conn_new_service); } return conn; diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index cecfd201d832..c8ff7489b412 100644 --- a/net/rxrpc/input.c +++ b/net/rxrpc/input.c @@ -1121,7 +1121,7 @@ static void rxrpc_post_packet_to_conn(struct rxrpc_connection *conn, _enter("%p,%p", conn, skb); skb_queue_tail(&conn->rx_queue, skb); - rxrpc_queue_conn(conn); + rxrpc_queue_conn(conn, rxrpc_conn_queue_rx_work); } /*
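The heart of this conversion is the EM()/E_() x-macro list at the top of the patch above: one list of points of interest expands once into an enum and once into the string table that __print_symbolic() uses when the trace is printed. The following is a minimal, self-contained userspace sketch of that pattern, not part of the posted patch; the demo_* names are invented for illustration, and the kernel's real mapping goes through TRACE_DEFINE_ENUM and __print_symbolic() rather than a plain string array.

#include <stdio.h>

/* One list of "why" points, written once... */
#define demo_conn_traces \
	EM(demo_conn_get_call_input,	"GET inp-call") \
	EM(demo_conn_new_client,	"NEW client  ") \
	E_(demo_conn_put_work,		"PUT work    ")

/* ...expanded a first time into the enum the callers pass around... */
#define EM(a, b) a,
#define E_(a, b) a
enum demo_conn_trace { demo_conn_traces };
#undef EM
#undef E_

/* ...and a second time into the strings shown in the trace output. */
#define EM(a, b) [a] = b,
#define E_(a, b) [a] = b
static const char *demo_conn_trace_names[] = { demo_conn_traces };
#undef EM
#undef E_

int main(void)
{
	enum demo_conn_trace why = demo_conn_get_call_input;

	/* Same shape as TP_printk("C=%08x %s r=%d", ...) in the patch. */
	printf("C=%08x %s r=%d\n", 0x3u, demo_conn_trace_names[why], 2);
	return 0;
}

Run, the sketch prints "C=00000003 GET inp-call r=2"; the named reason replaces the old sp=%pSR return-address field, so a trace line now says why a reference changed rather than relying on where the call came from.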
From patchwork Wed Nov 30 16:56:01 2022
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 27893
Subject: [PATCH net-next 12/35] rxrpc: trace: Don't use __builtin_return_address for rxrpc_call tracing
From: David Howells
To: netdev@vger.kernel.org
Cc: Marc Dionne, linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org
Date: Wed, 30 Nov 2022 16:56:01 +0000
Message-ID: <166982736179.621383.14214149882082452223.stgit@warthog.procyon.org.uk>
In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk>
References: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk>

In rxrpc tracing, use enums to generate lists of points of interest rather than __builtin_return_address() for the rxrpc_call tracepoint.

Signed-off-by: David Howells
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
---
include/trace/events/rxrpc.h | 83 ++++++++++++++++++++-------------- net/rxrpc/ar-internal.h | 8 ++- net/rxrpc/call_accept.c | 16 +++---- net/rxrpc/call_event.c | 8 ++- net/rxrpc/call_object.c | 102 ++++++++++++++++++------------------------ net/rxrpc/conn_client.c | 2 - net/rxrpc/input.c | 8 ++- net/rxrpc/output.c | 2 - net/rxrpc/peer_event.c | 2 - net/rxrpc/recvmsg.c | 8 ++- net/rxrpc/sendmsg.c | 4 +- 11 files changed, 121 insertions(+), 122 deletions(-) diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index e09568a8c173..3f6de4294148 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -127,26 +127,44 @@ E_(rxrpc_client_to_idle, "->Idle") #define rxrpc_call_traces \ - EM(rxrpc_call_connected, "CON") \ - EM(rxrpc_call_error, "*E*") \ - EM(rxrpc_call_got, "GOT") \ - EM(rxrpc_call_got_kernel, "Gke") \ - EM(rxrpc_call_got_timer, "GTM") \ - EM(rxrpc_call_got_tx, "Gtx") \ - EM(rxrpc_call_got_userid, "Gus") \ - EM(rxrpc_call_new_client, "NWc") \ - EM(rxrpc_call_new_service, "NWs") \ - EM(rxrpc_call_put, "PUT") \ - EM(rxrpc_call_put_kernel, "Pke") \ - EM(rxrpc_call_put_noqueue, "PnQ") \ - EM(rxrpc_call_put_notimer, "PnT") \ - EM(rxrpc_call_put_timer, "PTM") \ - EM(rxrpc_call_put_tx, "Ptx") \ - EM(rxrpc_call_put_userid, "Pus") \ - EM(rxrpc_call_queued, "QUE") \ - EM(rxrpc_call_queued_ref, "QUR") \ - EM(rxrpc_call_release, "RLS") \ - E_(rxrpc_call_seen, "SEE") + EM(rxrpc_call_get_input, "GET input ") \ + EM(rxrpc_call_get_kernel_service, "GET krnl-srv") \ + EM(rxrpc_call_get_notify_socket, "GET notify ") \ + EM(rxrpc_call_get_recvmsg, "GET recvmsg ") \ + EM(rxrpc_call_get_release_sock, "GET rel-sock") \ + EM(rxrpc_call_get_sendmsg, "GET sendmsg ") \ + EM(rxrpc_call_get_send_ack, "GET send-ack") \ + EM(rxrpc_call_get_timer, "GET timer ") \ + EM(rxrpc_call_get_userid, "GET user-id ") \ + EM(rxrpc_call_new_client, "NEW client ") \ + EM(rxrpc_call_new_prealloc_service, "NEW prealloc") \ + EM(rxrpc_call_put_already_queued, "PUT alreadyq") \ + EM(rxrpc_call_put_discard_prealloc, "PUT disc-pre") \ + EM(rxrpc_call_put_input, "PUT input ") \ +
EM(rxrpc_call_put_kernel, "PUT kernel ") \ + EM(rxrpc_call_put_recvmsg, "PUT recvmsg ") \ + EM(rxrpc_call_put_release_sock, "PUT rls-sock") \ + EM(rxrpc_call_put_release_sock_tba, "PUT rls-sk-a") \ + EM(rxrpc_call_put_send_ack, "PUT send-ack") \ + EM(rxrpc_call_put_sendmsg, "PUT sendmsg ") \ + EM(rxrpc_call_put_timer, "PUT timer ") \ + EM(rxrpc_call_put_timer_already, "PUT timer-al") \ + EM(rxrpc_call_put_unnotify, "PUT unnotify") \ + EM(rxrpc_call_put_userid_exists, "PUT u-exists") \ + EM(rxrpc_call_put_work, "PUT work ") \ + EM(rxrpc_call_queue_abort, "QUE abort ") \ + EM(rxrpc_call_queue_requeue, "QUE requeue ") \ + EM(rxrpc_call_queue_resend, "QUE resend ") \ + EM(rxrpc_call_queue_timer, "QUE timer ") \ + EM(rxrpc_call_see_accept, "SEE accept ") \ + EM(rxrpc_call_see_activate_client, "SEE act-clnt") \ + EM(rxrpc_call_see_connect_failed, "SEE con-fail") \ + EM(rxrpc_call_see_connected, "SEE connect ") \ + EM(rxrpc_call_see_distribute_error, "SEE dist-err") \ + EM(rxrpc_call_see_input, "SEE input ") \ + EM(rxrpc_call_see_release, "SEE release ") \ + EM(rxrpc_call_see_userid_exists, "SEE u-exists") \ + E_(rxrpc_call_see_zap, "SEE zap ") #define rxrpc_txqueue_traces \ EM(rxrpc_txqueue_await_reply, "AWR") \ @@ -503,32 +521,29 @@ TRACE_EVENT(rxrpc_client, ); TRACE_EVENT(rxrpc_call, - TP_PROTO(unsigned int call_debug_id, enum rxrpc_call_trace op, - int usage, const void *where, const void *aux), + TP_PROTO(unsigned int call_debug_id, int ref, unsigned long aux, + enum rxrpc_call_trace why), - TP_ARGS(call_debug_id, op, usage, where, aux), + TP_ARGS(call_debug_id, ref, aux, why), TP_STRUCT__entry( __field(unsigned int, call ) - __field(int, op ) - __field(int, usage ) - __field(const void *, where ) - __field(const void *, aux ) + __field(int, ref ) + __field(int, why ) + __field(unsigned long, aux ) ), TP_fast_assign( __entry->call = call_debug_id; - __entry->op = op; - __entry->usage = usage; - __entry->where = where; + __entry->ref = ref; + __entry->why = why; __entry->aux = aux; ), - TP_printk("c=%08x %s u=%d sp=%pSR a=%p", + TP_printk("c=%08x %s r=%d a=%lx", __entry->call, - __print_symbolic(__entry->op, rxrpc_call_traces), - __entry->usage, - __entry->where, + __print_symbolic(__entry->why, rxrpc_call_traces), + __entry->ref, __entry->aux) ); diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index bc8281c410c5..82eb09b961a0 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -847,10 +847,10 @@ void rxrpc_incoming_call(struct rxrpc_sock *, struct rxrpc_call *, struct sk_buff *); void rxrpc_release_call(struct rxrpc_sock *, struct rxrpc_call *); void rxrpc_release_calls_on_socket(struct rxrpc_sock *); -bool __rxrpc_queue_call(struct rxrpc_call *); -bool rxrpc_queue_call(struct rxrpc_call *); -void rxrpc_see_call(struct rxrpc_call *); -bool rxrpc_try_get_call(struct rxrpc_call *call, enum rxrpc_call_trace op); +bool __rxrpc_queue_call(struct rxrpc_call *, enum rxrpc_call_trace); +bool rxrpc_queue_call(struct rxrpc_call *, enum rxrpc_call_trace); +void rxrpc_see_call(struct rxrpc_call *, enum rxrpc_call_trace); +bool rxrpc_try_get_call(struct rxrpc_call *, enum rxrpc_call_trace); void rxrpc_get_call(struct rxrpc_call *, enum rxrpc_call_trace); void rxrpc_put_call(struct rxrpc_call *, enum rxrpc_call_trace); void rxrpc_cleanup_call(struct rxrpc_call *); diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c index 04b52e28e0cc..dd4ca4bee77f 100644 --- a/net/rxrpc/call_accept.c +++ b/net/rxrpc/call_accept.c @@ -38,7 +38,6 @@ static int 
rxrpc_service_prealloc_one(struct rxrpc_sock *rx, unsigned long user_call_ID, gfp_t gfp, unsigned int debug_id) { - const void *here = __builtin_return_address(0); struct rxrpc_call *call, *xcall; struct rxrpc_net *rxnet = rxrpc_net(sock_net(&rx->sk)); struct rb_node *parent, **pp; @@ -102,9 +101,8 @@ static int rxrpc_service_prealloc_one(struct rxrpc_sock *rx, call->flags |= (1 << RXRPC_CALL_IS_SERVICE); call->state = RXRPC_CALL_SERVER_PREALLOC; - trace_rxrpc_call(call->debug_id, rxrpc_call_new_service, - refcount_read(&call->ref), - here, (const void *)user_call_ID); + trace_rxrpc_call(call->debug_id, refcount_read(&call->ref), + user_call_ID, rxrpc_call_new_prealloc_service); write_lock(&rx->call_lock); @@ -125,11 +123,11 @@ static int rxrpc_service_prealloc_one(struct rxrpc_sock *rx, call->user_call_ID = user_call_ID; call->notify_rx = notify_rx; if (user_attach_call) { - rxrpc_get_call(call, rxrpc_call_got_kernel); + rxrpc_get_call(call, rxrpc_call_get_kernel_service); user_attach_call(call, user_call_ID); } - rxrpc_get_call(call, rxrpc_call_got_userid); + rxrpc_get_call(call, rxrpc_call_get_userid); rb_link_node(&call->sock_node, parent, pp); rb_insert_color(&call->sock_node, &rx->calls); set_bit(RXRPC_CALL_HAS_USERID, &call->flags); @@ -229,7 +227,7 @@ void rxrpc_discard_prealloc(struct rxrpc_sock *rx) } rxrpc_call_completed(call); rxrpc_release_call(rx, call); - rxrpc_put_call(call, rxrpc_call_put); + rxrpc_put_call(call, rxrpc_call_put_discard_prealloc); tail = (tail + 1) & (size - 1); } @@ -318,7 +316,7 @@ static struct rxrpc_call *rxrpc_alloc_incoming_call(struct rxrpc_sock *rx, smp_store_release(&b->call_backlog_tail, (call_tail + 1) & (RXRPC_BACKLOG_MAX - 1)); - rxrpc_see_call(call); + rxrpc_see_call(call, rxrpc_call_see_accept); call->conn = conn; call->security = conn->security; call->security_ix = conn->security_ix; @@ -430,7 +428,7 @@ struct rxrpc_call *rxrpc_new_incoming_call(struct rxrpc_local *local, * (recvmsg queue, to-be-accepted queue or user ID tree) or the kernel * service to prevent the call from being deallocated too early. 
*/ - rxrpc_put_call(call, rxrpc_call_put); + rxrpc_put_call(call, rxrpc_call_put_discard_prealloc); _leave(" = %p{%d}", call, call->debug_id); return call; diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c index 591af8e2e3d0..0c8d2186cda8 100644 --- a/net/rxrpc/call_event.c +++ b/net/rxrpc/call_event.c @@ -101,7 +101,7 @@ void rxrpc_send_ACK(struct rxrpc_call *call, u8 ack_reason, txb->ack.reason = ack_reason; txb->ack.nAcks = 0; - if (!rxrpc_try_get_call(call, rxrpc_call_got)) { + if (!rxrpc_try_get_call(call, rxrpc_call_get_send_ack)) { rxrpc_put_txbuf(txb, rxrpc_txbuf_put_nomem); return; } @@ -302,7 +302,7 @@ void rxrpc_process_call(struct work_struct *work) unsigned int iterations = 0; rxrpc_serial_t ackr_serial; - rxrpc_see_call(call); + rxrpc_see_call(call, rxrpc_call_see_input); //printk("\n--------------------\n"); _enter("{%d,%s,%lx}", @@ -436,12 +436,12 @@ void rxrpc_process_call(struct work_struct *work) goto requeue; out_put: - rxrpc_put_call(call, rxrpc_call_put); + rxrpc_put_call(call, rxrpc_call_put_work); out: _leave(""); return; requeue: - __rxrpc_queue_call(call); + __rxrpc_queue_call(call, rxrpc_call_queue_requeue); goto out; } diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c index 29ec4013aa0b..afd957f6dc1c 100644 --- a/net/rxrpc/call_object.c +++ b/net/rxrpc/call_object.c @@ -53,9 +53,9 @@ static void rxrpc_call_timer_expired(struct timer_list *t) if (call->state < RXRPC_CALL_COMPLETE) { trace_rxrpc_timer_expired(call, jiffies); - __rxrpc_queue_call(call); + __rxrpc_queue_call(call, rxrpc_call_queue_timer); } else { - rxrpc_put_call(call, rxrpc_call_put); + rxrpc_put_call(call, rxrpc_call_put_already_queued); } } @@ -64,10 +64,10 @@ void rxrpc_reduce_call_timer(struct rxrpc_call *call, unsigned long now, enum rxrpc_timer_trace why) { - if (rxrpc_try_get_call(call, rxrpc_call_got_timer)) { + if (rxrpc_try_get_call(call, rxrpc_call_get_timer)) { trace_rxrpc_timer(call, why, now); if (timer_reduce(&call->timer, expire_at)) - rxrpc_put_call(call, rxrpc_call_put_notimer); + rxrpc_put_call(call, rxrpc_call_put_timer_already); } } @@ -110,7 +110,7 @@ struct rxrpc_call *rxrpc_find_call_by_user_ID(struct rxrpc_sock *rx, return NULL; found_extant_call: - rxrpc_get_call(call, rxrpc_call_got); + rxrpc_get_call(call, rxrpc_call_get_sendmsg); read_unlock(&rx->call_lock); _leave(" = %p [%d]", call, refcount_read(&call->ref)); return call; @@ -270,7 +270,6 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx, struct rxrpc_net *rxnet; struct semaphore *limiter; struct rb_node *parent, **pp; - const void *here = __builtin_return_address(0); int ret; _enter("%p,%lx", rx, p->user_call_ID); @@ -291,9 +290,8 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx, call->interruptibility = p->interruptibility; call->tx_total_len = p->tx_total_len; - trace_rxrpc_call(call->debug_id, rxrpc_call_new_client, - refcount_read(&call->ref), - here, (const void *)p->user_call_ID); + trace_rxrpc_call(call->debug_id, refcount_read(&call->ref), + p->user_call_ID, rxrpc_call_new_client); if (p->kernel) __set_bit(RXRPC_CALL_KERNEL, &call->flags); @@ -322,7 +320,7 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx, rcu_assign_pointer(call->socket, rx); call->user_call_ID = p->user_call_ID; __set_bit(RXRPC_CALL_HAS_USERID, &call->flags); - rxrpc_get_call(call, rxrpc_call_got_userid); + rxrpc_get_call(call, rxrpc_call_get_userid); rb_link_node(&call->sock_node, parent, pp); rb_insert_color(&call->sock_node, &rx->calls); list_add(&call->sock_link, 
&rx->sock_calls); @@ -344,8 +342,7 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx, if (ret < 0) goto error_attached_to_socket; - trace_rxrpc_call(call->debug_id, rxrpc_call_connected, - refcount_read(&call->ref), here, NULL); + rxrpc_see_call(call, rxrpc_call_see_connected); rxrpc_start_call_timer(call); @@ -362,11 +359,11 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx, release_sock(&rx->sk); __rxrpc_set_call_completion(call, RXRPC_CALL_LOCAL_ERROR, RX_CALL_DEAD, -EEXIST); - trace_rxrpc_call(call->debug_id, rxrpc_call_error, - refcount_read(&call->ref), here, ERR_PTR(-EEXIST)); + trace_rxrpc_call(call->debug_id, refcount_read(&call->ref), 0, + rxrpc_call_see_userid_exists); rxrpc_release_call(rx, call); mutex_unlock(&call->user_mutex); - rxrpc_put_call(call, rxrpc_call_put); + rxrpc_put_call(call, rxrpc_call_put_userid_exists); _leave(" = -EEXIST"); return ERR_PTR(-EEXIST); @@ -376,8 +373,8 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx, * leave the error to recvmsg() to deal with. */ error_attached_to_socket: - trace_rxrpc_call(call->debug_id, rxrpc_call_error, - refcount_read(&call->ref), here, ERR_PTR(ret)); + trace_rxrpc_call(call->debug_id, refcount_read(&call->ref), ret, + rxrpc_call_see_connect_failed); set_bit(RXRPC_CALL_DISCONNECTED, &call->flags); __rxrpc_set_call_completion(call, RXRPC_CALL_LOCAL_ERROR, RX_CALL_DEAD, ret); @@ -428,72 +425,65 @@ void rxrpc_incoming_call(struct rxrpc_sock *rx, /* * Queue a call's work processor, getting a ref to pass to the work queue. */ -bool rxrpc_queue_call(struct rxrpc_call *call) +bool rxrpc_queue_call(struct rxrpc_call *call, enum rxrpc_call_trace why) { - const void *here = __builtin_return_address(0); int n; if (!__refcount_inc_not_zero(&call->ref, &n)) return false; if (rxrpc_queue_work(&call->processor)) - trace_rxrpc_call(call->debug_id, rxrpc_call_queued, n + 1, - here, NULL); + trace_rxrpc_call(call->debug_id, n + 1, 0, why); else - rxrpc_put_call(call, rxrpc_call_put_noqueue); + rxrpc_put_call(call, rxrpc_call_put_already_queued); return true; } /* * Queue a call's work processor, passing the callers ref to the work queue. */ -bool __rxrpc_queue_call(struct rxrpc_call *call) +bool __rxrpc_queue_call(struct rxrpc_call *call, enum rxrpc_call_trace why) { - const void *here = __builtin_return_address(0); int n = refcount_read(&call->ref); + ASSERTCMP(n, >=, 1); if (rxrpc_queue_work(&call->processor)) - trace_rxrpc_call(call->debug_id, rxrpc_call_queued_ref, n, - here, NULL); + trace_rxrpc_call(call->debug_id, n, 0, why); else - rxrpc_put_call(call, rxrpc_call_put_noqueue); + rxrpc_put_call(call, rxrpc_call_put_already_queued); return true; } /* * Note the re-emergence of a call. 
*/ -void rxrpc_see_call(struct rxrpc_call *call) +void rxrpc_see_call(struct rxrpc_call *call, enum rxrpc_call_trace why) { - const void *here = __builtin_return_address(0); if (call) { - int n = refcount_read(&call->ref); + int r = refcount_read(&call->ref); - trace_rxrpc_call(call->debug_id, rxrpc_call_seen, n, - here, NULL); + trace_rxrpc_call(call->debug_id, r, 0, why); } } -bool rxrpc_try_get_call(struct rxrpc_call *call, enum rxrpc_call_trace op) +bool rxrpc_try_get_call(struct rxrpc_call *call, enum rxrpc_call_trace why) { - const void *here = __builtin_return_address(0); - int n; + int r; - if (!__refcount_inc_not_zero(&call->ref, &n)) + if (!__refcount_inc_not_zero(&call->ref, &r)) return false; - trace_rxrpc_call(call->debug_id, op, n + 1, here, NULL); + trace_rxrpc_call(call->debug_id, r + 1, 0, why); return true; } /* * Note the addition of a ref on a call. */ -void rxrpc_get_call(struct rxrpc_call *call, enum rxrpc_call_trace op) +void rxrpc_get_call(struct rxrpc_call *call, enum rxrpc_call_trace why) { - const void *here = __builtin_return_address(0); - int n; + int r; - __refcount_inc(&call->ref, &n); - trace_rxrpc_call(call->debug_id, op, n + 1, here, NULL); + __refcount_inc(&call->ref, &r); + trace_rxrpc_call(call->debug_id, r + 1, 0, why); } /* @@ -510,15 +500,13 @@ static void rxrpc_cleanup_ring(struct rxrpc_call *call) */ void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call) { - const void *here = __builtin_return_address(0); struct rxrpc_connection *conn = call->conn; bool put = false; _enter("{%d,%d}", call->debug_id, refcount_read(&call->ref)); - trace_rxrpc_call(call->debug_id, rxrpc_call_release, - refcount_read(&call->ref), - here, (const void *)call->flags); + trace_rxrpc_call(call->debug_id, refcount_read(&call->ref), + call->flags, rxrpc_call_see_release); ASSERTCMP(call->state, ==, RXRPC_CALL_COMPLETE); @@ -544,14 +532,14 @@ void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call) write_unlock_bh(&rx->recvmsg_lock); if (put) - rxrpc_put_call(call, rxrpc_call_put); + rxrpc_put_call(call, rxrpc_call_put_unnotify); write_lock(&rx->call_lock); if (test_and_clear_bit(RXRPC_CALL_HAS_USERID, &call->flags)) { rb_erase(&call->sock_node, &rx->calls); memset(&call->sock_node, 0xdd, sizeof(call->sock_node)); - rxrpc_put_call(call, rxrpc_call_put_userid); + rxrpc_put_call(call, rxrpc_call_put_userid_exists); } list_del(&call->sock_link); @@ -580,17 +568,17 @@ void rxrpc_release_calls_on_socket(struct rxrpc_sock *rx) struct rxrpc_call, accept_link); list_del(&call->accept_link); rxrpc_abort_call("SKR", call, 0, RX_CALL_DEAD, -ECONNRESET); - rxrpc_put_call(call, rxrpc_call_put); + rxrpc_put_call(call, rxrpc_call_put_release_sock_tba); } while (!list_empty(&rx->sock_calls)) { call = list_entry(rx->sock_calls.next, struct rxrpc_call, sock_link); - rxrpc_get_call(call, rxrpc_call_got); + rxrpc_get_call(call, rxrpc_call_get_release_sock); rxrpc_abort_call("SKT", call, 0, RX_CALL_DEAD, -ECONNRESET); rxrpc_send_abort_packet(call); rxrpc_release_call(rx, call); - rxrpc_put_call(call, rxrpc_call_put); + rxrpc_put_call(call, rxrpc_call_put_release_sock); } _leave(""); @@ -599,20 +587,18 @@ void rxrpc_release_calls_on_socket(struct rxrpc_sock *rx) /* * release a call */ -void rxrpc_put_call(struct rxrpc_call *call, enum rxrpc_call_trace op) +void rxrpc_put_call(struct rxrpc_call *call, enum rxrpc_call_trace why) { struct rxrpc_net *rxnet = call->rxnet; - const void *here = __builtin_return_address(0); unsigned int debug_id = call->debug_id; bool dead; 
- int n; + int r; ASSERT(call != NULL); - dead = __refcount_dec_and_test(&call->ref, &n); - trace_rxrpc_call(debug_id, op, n, here, NULL); + dead = __refcount_dec_and_test(&call->ref, &r); + trace_rxrpc_call(debug_id, r - 1, 0, why); if (dead) { - _debug("call %d dead", call->debug_id); ASSERTCMP(call->state, ==, RXRPC_CALL_COMPLETE); if (!list_empty(&call->link)) { @@ -701,7 +687,7 @@ void rxrpc_destroy_all_calls(struct rxrpc_net *rxnet) struct rxrpc_call, link); _debug("Zapping call %p", call); - rxrpc_see_call(call); + rxrpc_see_call(call, rxrpc_call_see_zap); list_del_init(&call->link); pr_err("Call %p still in use (%d,%s,%lx,%lx)!\n", diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c index dcfec6a45255..4352e777aa2a 100644 --- a/net/rxrpc/conn_client.c +++ b/net/rxrpc/conn_client.c @@ -540,7 +540,7 @@ static void rxrpc_activate_one_channel(struct rxrpc_connection *conn, clear_bit(RXRPC_CONN_FINAL_ACK_0 + channel, &conn->flags); clear_bit(conn->bundle_shift + channel, &bundle->avail_chans); - rxrpc_see_call(call); + rxrpc_see_call(call, rxrpc_call_see_activate_client); list_del_init(&call->chan_wait_link); call->peer = rxrpc_get_peer(conn->peer, rxrpc_peer_get_activate_call); call->conn = rxrpc_get_connection(conn, rxrpc_conn_get_activate_call); diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index c8ff7489b412..09b44cd11c9b 100644 --- a/net/rxrpc/input.c +++ b/net/rxrpc/input.c @@ -14,7 +14,7 @@ static void rxrpc_proto_abort(const char *why, { if (rxrpc_abort_call(why, call, seq, RX_PROTOCOL_ERROR, -EBADMSG)) { set_bit(RXRPC_CALL_EV_ABORT, &call->events); - rxrpc_queue_call(call); + rxrpc_queue_call(call, rxrpc_call_queue_abort); } } @@ -175,7 +175,7 @@ static void rxrpc_congestion_management(struct rxrpc_call *call, call->cong_cumul_acks = cumulative_acks; trace_rxrpc_congest(call, summary, acked_serial, change); if (resend && !test_and_set_bit(RXRPC_CALL_EV_RESEND, &call->events)) - rxrpc_queue_call(call); + rxrpc_queue_call(call, rxrpc_call_queue_resend); return; packet_loss_detected: @@ -678,7 +678,7 @@ static void rxrpc_input_check_for_lost_ack(struct rxrpc_call *call) { if (after(call->acks_lost_top, call->acks_prev_seq) && !test_and_set_bit(RXRPC_CALL_EV_RESEND, &call->events)) - rxrpc_queue_call(call); + rxrpc_queue_call(call, rxrpc_call_queue_resend); } /* @@ -1099,7 +1099,7 @@ static void rxrpc_input_implicit_end_call(struct rxrpc_sock *rx, default: if (rxrpc_abort_call("IMP", call, 0, RX_CALL_DEAD, -ESHUTDOWN)) { set_bit(RXRPC_CALL_EV_ABORT, &call->events); - rxrpc_queue_call(call); + rxrpc_queue_call(call, rxrpc_call_queue_abort); } trace_rxrpc_improper_term(call); break; diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index 2762b7ada9ae..d324e88f7642 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -310,7 +310,7 @@ void rxrpc_transmit_ack_packets(struct rxrpc_local *local) } list_del_init(&txb->tx_link); - rxrpc_put_call(txb->call, rxrpc_call_put); + rxrpc_put_call(txb->call, rxrpc_call_put_send_ack); rxrpc_put_txbuf(txb, rxrpc_txbuf_put_ack_tx); } } diff --git a/net/rxrpc/peer_event.c b/net/rxrpc/peer_event.c index 5e97d321ac38..b28739d10927 100644 --- a/net/rxrpc/peer_event.c +++ b/net/rxrpc/peer_event.c @@ -238,7 +238,7 @@ static void rxrpc_distribute_error(struct rxrpc_peer *peer, int error, struct rxrpc_call *call; hlist_for_each_entry_rcu(call, &peer->error_targets, error_link) { - rxrpc_see_call(call); + rxrpc_see_call(call, rxrpc_call_see_distribute_error); rxrpc_set_call_completion(call, compl, 0, -error); } } diff --git 
a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c index 134122f5961a..c84d2b620396 100644 --- a/net/rxrpc/recvmsg.c +++ b/net/rxrpc/recvmsg.c @@ -42,7 +42,7 @@ void rxrpc_notify_socket(struct rxrpc_call *call) } else { write_lock_bh(&rx->recvmsg_lock); if (list_empty(&call->recvmsg_link)) { - rxrpc_get_call(call, rxrpc_call_got); + rxrpc_get_call(call, rxrpc_call_get_notify_socket); list_add_tail(&call->recvmsg_link, &rx->recvmsg_q); } write_unlock_bh(&rx->recvmsg_lock); @@ -451,7 +451,7 @@ int rxrpc_recvmsg(struct socket *sock, struct msghdr *msg, size_t len, if (!(flags & MSG_PEEK)) list_del_init(&call->recvmsg_link); else - rxrpc_get_call(call, rxrpc_call_got); + rxrpc_get_call(call, rxrpc_call_get_recvmsg); write_unlock_bh(&rx->recvmsg_lock); trace_rxrpc_recvmsg(call, rxrpc_recvmsg_dequeue, 0); @@ -537,7 +537,7 @@ int rxrpc_recvmsg(struct socket *sock, struct msghdr *msg, size_t len, error_unlock_call: mutex_unlock(&call->user_mutex); - rxrpc_put_call(call, rxrpc_call_put); + rxrpc_put_call(call, rxrpc_call_put_recvmsg); trace_rxrpc_recvmsg(call, rxrpc_recvmsg_return, ret); return ret; @@ -548,7 +548,7 @@ int rxrpc_recvmsg(struct socket *sock, struct msghdr *msg, size_t len, write_unlock_bh(&rx->recvmsg_lock); trace_rxrpc_recvmsg(call, rxrpc_recvmsg_requeue, 0); } else { - rxrpc_put_call(call, rxrpc_call_put); + rxrpc_put_call(call, rxrpc_call_put_recvmsg); } error_no_call: release_sock(&rx->sk); diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c index cfe0badba0b3..76b1e2e89c1e 100644 --- a/net/rxrpc/sendmsg.c +++ b/net/rxrpc/sendmsg.c @@ -667,7 +667,7 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len) case RXRPC_CALL_CLIENT_AWAIT_CONN: case RXRPC_CALL_SERVER_PREALLOC: case RXRPC_CALL_SERVER_SECURING: - rxrpc_put_call(call, rxrpc_call_put); + rxrpc_put_call(call, rxrpc_call_put_sendmsg); ret = -EBUSY; goto error_release_sock; default: @@ -737,7 +737,7 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len) if (!dropped_lock) mutex_unlock(&call->user_mutex); error_put: - rxrpc_put_call(call, rxrpc_call_put); + rxrpc_put_call(call, rxrpc_call_put_sendmsg); _leave(" = %d", ret); return ret;
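To make the new calling convention concrete, here is a small self-contained userspace sketch, not part of the posted patch, of the reworked rxrpc_call trace shape: debug ID, refcount, an auxiliary value and a reason enum. The demo_* identifiers are invented; the reason strings and the "c=%08x %s r=%d a=%lx" format follow the hunks above.

#include <stdio.h>

enum demo_call_trace {
	demo_call_get_recvmsg,
	demo_call_put_recvmsg,
	demo_call_new_prealloc_service,
};

static const char *demo_call_trace_names[] = {
	[demo_call_get_recvmsg]			= "GET recvmsg ",
	[demo_call_put_recvmsg]			= "PUT recvmsg ",
	[demo_call_new_prealloc_service]	= "NEW prealloc",
};

/* Stands in for trace_rxrpc_call(debug_id, ref, aux, why). */
static void demo_trace_call(unsigned int debug_id, int ref,
			    unsigned long aux, enum demo_call_trace why)
{
	printf("c=%08x %s r=%d a=%lx\n",
	       debug_id, demo_call_trace_names[why], ref, aux);
}

int main(void)
{
	/* A preallocated service call, noting its user call ID as aux. */
	demo_trace_call(0x2, 1, 0x1234, demo_call_new_prealloc_service);

	/* A recvmsg get/put pair now pairs up by name in the trace log. */
	demo_trace_call(0x2, 2, 0, demo_call_get_recvmsg);
	demo_trace_call(0x2, 1, 0, demo_call_put_recvmsg);
	return 0;
}

In the kernel the aux slot carries things such as the user call ID or call->flags, which is why the tracepoint now takes an unsigned long where it previously recorded a second pointer.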
From patchwork Wed Nov 30 16:56:10 2022
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 27887
Subject: [PATCH net-next 13/35] rxrpc: Trace rxrpc_bundle refcount
From: David Howells
To: netdev@vger.kernel.org
Cc: Marc Dionne, linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org
Date: Wed, 30 Nov 2022 16:56:10 +0000
Message-ID: <166982737062.621383.16260737246000488854.stgit@warthog.procyon.org.uk>
In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk>
References: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk>

Add a tracepoint for the rxrpc_bundle refcounting.

Signed-off-by: David Howells
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
---
include/trace/events/rxrpc.h | 34 ++++++++++++++++++++++++++++++++++ net/rxrpc/ar-internal.h | 4 ++-- net/rxrpc/conn_client.c | 27 ++++++++++++++++----------- net/rxrpc/conn_object.c | 2 +- net/rxrpc/conn_service.c | 3 ++- 5 files changed, 55 insertions(+), 15 deletions(-) diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index 3f6de4294148..6f5be7ac7f6b 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -81,6 +81,15 @@ EM(rxrpc_peer_put_input_error, "PUT inpt-err") \ E_(rxrpc_peer_put_keepalive, "PUT keepaliv") +#define rxrpc_bundle_traces \ + EM(rxrpc_bundle_free, "FREE ") \ + EM(rxrpc_bundle_get_client_call, "GET clt-call") \ + EM(rxrpc_bundle_get_client_conn, "GET clt-conn") \ + EM(rxrpc_bundle_get_service_conn, "GET svc-conn") \ + EM(rxrpc_bundle_put_conn, "PUT conn ") \ + EM(rxrpc_bundle_put_discard, "PUT discard ") \ + E_(rxrpc_bundle_new, "NEW ") + #define rxrpc_conn_traces \ EM(rxrpc_conn_free, "FREE ") \ EM(rxrpc_conn_get_activate_call, "GET act-call") \ @@ -361,6 +370,7 @@ #define EM(a, b) a, #define E_(a, b) a +enum rxrpc_bundle_trace { rxrpc_bundle_traces } __mode(byte); enum rxrpc_call_trace { rxrpc_call_traces } __mode(byte); enum rxrpc_client_trace { rxrpc_client_traces } __mode(byte); enum rxrpc_congest_change { rxrpc_congest_changes } __mode(byte); @@ -390,6 +400,7 @@ enum rxrpc_txqueue_trace { rxrpc_txqueue_traces } __mode(byte); #define EM(a, b) TRACE_DEFINE_ENUM(a); #define E_(a, b) TRACE_DEFINE_ENUM(a); +rxrpc_bundle_traces; rxrpc_call_traces; rxrpc_client_traces; rxrpc_congest_changes; @@ -467,6 +478,29 @@ TRACE_EVENT(rxrpc_peer, __entry->ref) ); +TRACE_EVENT(rxrpc_bundle, + TP_PROTO(unsigned int bundle_debug_id, int ref, enum rxrpc_bundle_trace why), + + TP_ARGS(bundle_debug_id, ref, why), + + TP_STRUCT__entry( + __field(unsigned int, bundle ) + __field(int, ref ) + __field(int, why ) + ), + + TP_fast_assign( + __entry->bundle = bundle_debug_id; + __entry->ref = ref; + __entry->why = why; + ), + + TP_printk("CB=%08x %s r=%d", + __entry->bundle, + __print_symbolic(__entry->why, rxrpc_bundle_traces), + __entry->ref) + ); + TRACE_EVENT(rxrpc_conn, TP_PROTO(unsigned int conn_debug_id, int ref, enum rxrpc_conn_trace why), diff --git
a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 82eb09b961a0..c588c0e81f63 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -875,8 +875,8 @@ extern unsigned long rxrpc_conn_idle_client_fast_expiry; extern struct idr rxrpc_client_conn_ids; void rxrpc_destroy_client_conn_ids(void); -struct rxrpc_bundle *rxrpc_get_bundle(struct rxrpc_bundle *); -void rxrpc_put_bundle(struct rxrpc_bundle *); +struct rxrpc_bundle *rxrpc_get_bundle(struct rxrpc_bundle *, enum rxrpc_bundle_trace); +void rxrpc_put_bundle(struct rxrpc_bundle *, enum rxrpc_bundle_trace); int rxrpc_connect_call(struct rxrpc_sock *, struct rxrpc_call *, struct rxrpc_conn_parameters *, struct sockaddr_rxrpc *, gfp_t); diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c index 4352e777aa2a..34ff6fa85c32 100644 --- a/net/rxrpc/conn_client.c +++ b/net/rxrpc/conn_client.c @@ -133,31 +133,36 @@ static struct rxrpc_bundle *rxrpc_alloc_bundle(struct rxrpc_conn_parameters *cp, atomic_set(&bundle->active, 1); spin_lock_init(&bundle->channel_lock); INIT_LIST_HEAD(&bundle->waiting_calls); + trace_rxrpc_bundle(bundle->debug_id, 1, rxrpc_bundle_new); } return bundle; } -struct rxrpc_bundle *rxrpc_get_bundle(struct rxrpc_bundle *bundle) +struct rxrpc_bundle *rxrpc_get_bundle(struct rxrpc_bundle *bundle, + enum rxrpc_bundle_trace why) { - refcount_inc(&bundle->ref); + int r; + + __refcount_inc(&bundle->ref, &r); + trace_rxrpc_bundle(bundle->debug_id, r + 1, why); return bundle; } static void rxrpc_free_bundle(struct rxrpc_bundle *bundle) { + trace_rxrpc_bundle(bundle->debug_id, 1, rxrpc_bundle_free); rxrpc_put_peer(bundle->peer, rxrpc_peer_put_bundle); kfree(bundle); } -void rxrpc_put_bundle(struct rxrpc_bundle *bundle) +void rxrpc_put_bundle(struct rxrpc_bundle *bundle, enum rxrpc_bundle_trace why) { - unsigned int d = bundle->debug_id; + unsigned int id = bundle->debug_id; bool dead; int r; dead = __refcount_dec_and_test(&bundle->ref, &r); - - _debug("PUT B=%x %d", d, r - 1); + trace_rxrpc_bundle(id, r - 1, why); if (dead) rxrpc_free_bundle(bundle); } @@ -206,7 +211,7 @@ rxrpc_alloc_client_connection(struct rxrpc_bundle *bundle, gfp_t gfp) list_add_tail(&conn->proc_link, &rxnet->conn_proc_list); write_unlock(&rxnet->conn_lock); - rxrpc_get_bundle(bundle); + rxrpc_get_bundle(bundle, rxrpc_bundle_get_client_conn); rxrpc_get_peer(conn->peer, rxrpc_peer_get_client_conn); rxrpc_get_local(conn->local, rxrpc_local_get_client_conn); key_get(conn->key); @@ -342,7 +347,7 @@ static struct rxrpc_bundle *rxrpc_look_up_bundle(struct rxrpc_conn_parameters *c candidate->debug_id = atomic_inc_return(&rxrpc_bundle_id); rb_link_node(&candidate->local_node, parent, pp); rb_insert_color(&candidate->local_node, &local->client_bundles); - rxrpc_get_bundle(candidate); + rxrpc_get_bundle(candidate, rxrpc_bundle_get_client_call); spin_unlock(&local->client_bundles_lock); _leave(" = %u [new]", candidate->debug_id); return candidate; @@ -350,7 +355,7 @@ static struct rxrpc_bundle *rxrpc_look_up_bundle(struct rxrpc_conn_parameters *c found_bundle_free: rxrpc_free_bundle(candidate); found_bundle: - rxrpc_get_bundle(bundle); + rxrpc_get_bundle(bundle, rxrpc_bundle_get_client_call); atomic_inc(&bundle->active); spin_unlock(&local->client_bundles_lock); _leave(" = %u [found]", bundle->debug_id); @@ -740,7 +745,7 @@ int rxrpc_connect_call(struct rxrpc_sock *rx, out_put_bundle: rxrpc_deactivate_bundle(bundle); - rxrpc_put_bundle(bundle); + rxrpc_put_bundle(bundle, rxrpc_bundle_get_client_call); out: _leave(" = %d", ret); return ret; @@ -958,7 
+963,7 @@ static void rxrpc_deactivate_bundle(struct rxrpc_bundle *bundle) spin_unlock(&local->client_bundles_lock); if (need_put) - rxrpc_put_bundle(bundle); + rxrpc_put_bundle(bundle, rxrpc_bundle_put_discard); } } diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c index bbace8d9953d..f7c271a740ed 100644 --- a/net/rxrpc/conn_object.c +++ b/net/rxrpc/conn_object.c @@ -363,7 +363,7 @@ static void rxrpc_destroy_connection(struct rcu_head *rcu) conn->security->clear(conn); key_put(conn->key); - rxrpc_put_bundle(conn->bundle); + rxrpc_put_bundle(conn->bundle, rxrpc_bundle_put_conn); rxrpc_put_peer(conn->peer, rxrpc_peer_put_conn); if (atomic_dec_and_test(&conn->local->rxnet->nr_conns)) diff --git a/net/rxrpc/conn_service.c b/net/rxrpc/conn_service.c index bf087213bd4d..2c44d67b43dc 100644 --- a/net/rxrpc/conn_service.c +++ b/net/rxrpc/conn_service.c @@ -133,7 +133,8 @@ struct rxrpc_connection *rxrpc_prealloc_service_connection(struct rxrpc_net *rxn */ conn->state = RXRPC_CONN_SERVICE_PREALLOC; refcount_set(&conn->ref, 2); - conn->bundle = rxrpc_get_bundle(&rxrpc_service_dummy_bundle); + conn->bundle = rxrpc_get_bundle(&rxrpc_service_dummy_bundle, + rxrpc_bundle_get_service_conn); atomic_inc(&rxnet->nr_conns); write_lock(&rxnet->conn_lock);
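As a final sketch, not part of the posted patch, the snippet below mimics how the new rxrpc_bundle tracepoint is fed. The kernel's __refcount_inc() and __refcount_dec_and_test() report the count sampled before the operation, so rxrpc_get_bundle() traces r + 1 and rxrpc_put_bundle() traces r - 1, as in the hunks above; the demo_* names are invented and a plain int stands in for refcount_t.

#include <stdio.h>

enum demo_bundle_trace {
	demo_bundle_get_client_call,
	demo_bundle_put_discard,
};

static const char *demo_bundle_trace_names[] = {
	[demo_bundle_get_client_call] = "GET clt-call",
	[demo_bundle_put_discard]     = "PUT discard ",
};

struct demo_bundle {
	unsigned int	debug_id;
	int		ref;		/* stands in for refcount_t */
};

/* Mirrors TP_printk("CB=%08x %s r=%d", ...) from the patch above. */
static void demo_trace_bundle(unsigned int id, int ref, enum demo_bundle_trace why)
{
	printf("CB=%08x %s r=%d\n", id, demo_bundle_trace_names[why], ref);
}

static void demo_get_bundle(struct demo_bundle *b, enum demo_bundle_trace why)
{
	int r = b->ref++;		/* count as sampled before the increment */

	demo_trace_bundle(b->debug_id, r + 1, why);
}

static void demo_put_bundle(struct demo_bundle *b, enum demo_bundle_trace why)
{
	int r = b->ref--;		/* count as sampled before the decrement */

	demo_trace_bundle(b->debug_id, r - 1, why);
	/* the kernel would free the bundle here once the count hits zero */
}

int main(void)
{
	struct demo_bundle b = { .debug_id = 0x5, .ref = 1 };

	demo_get_bundle(&b, demo_bundle_get_client_call);	/* CB=00000005 GET clt-call r=2 */
	demo_put_bundle(&b, demo_bundle_put_discard);		/* CB=00000005 PUT discard  r=1 */
	return 0;
}

The rxrpc_conn and rxrpc_call get/put helpers converted earlier in the series report their refcounts in the same post-operation form, so the three trace logs can be read side by side.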
Organization: Red Hat UK Ltd. Registered in England and Wales under Company Registration No.
3798903 Subject: [PATCH net-next 14/35] rxrpc: trace: Don't use __builtin_return_address for sk_buff tracing From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Wed, 30 Nov 2022 16:56:19 +0000 Message-ID: <166982737927.621383.1527931764497198598.stgit@warthog.procyon.org.uk> In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> References: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.6 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750941202309227051?= X-GMAIL-MSGID: =?utf-8?q?1750941202309227051?= In rxrpc tracing, use enums to generate lists of points of interest rather than __builtin_return_address() for the sk_buff tracepoint. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- include/trace/events/rxrpc.h | 57 ++++++++++++++++++++++++------------------ net/rxrpc/call_event.c | 4 +-- net/rxrpc/call_object.c | 2 + net/rxrpc/conn_event.c | 6 ++-- net/rxrpc/input.c | 36 +++++++++++++-------------- net/rxrpc/local_event.c | 4 +-- net/rxrpc/output.c | 6 ++-- net/rxrpc/peer_event.c | 8 +++--- net/rxrpc/recvmsg.c | 6 ++-- net/rxrpc/skbuff.c | 36 +++++++++++---------------- 10 files changed, 84 insertions(+), 81 deletions(-) diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index 6f5be7ac7f6b..5a2292baffc8 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -17,19 +17,31 @@ * Declare tracing information enums and their string mappings for display. 
*/ #define rxrpc_skb_traces \ - EM(rxrpc_skb_ack, "ACK") \ - EM(rxrpc_skb_cleaned, "CLN") \ - EM(rxrpc_skb_cloned_jumbo, "CLJ") \ - EM(rxrpc_skb_freed, "FRE") \ - EM(rxrpc_skb_got, "GOT") \ - EM(rxrpc_skb_lost, "*L*") \ - EM(rxrpc_skb_new, "NEW") \ - EM(rxrpc_skb_purged, "PUR") \ - EM(rxrpc_skb_received, "RCV") \ - EM(rxrpc_skb_rotated, "ROT") \ - EM(rxrpc_skb_seen, "SEE") \ - EM(rxrpc_skb_unshared, "UNS") \ - E_(rxrpc_skb_unshared_nomem, "US0") + EM(rxrpc_skb_eaten_by_unshare, "ETN unshare ") \ + EM(rxrpc_skb_eaten_by_unshare_nomem, "ETN unshar-nm") \ + EM(rxrpc_skb_get_ack, "GET ack ") \ + EM(rxrpc_skb_get_conn_work, "GET conn-work") \ + EM(rxrpc_skb_get_to_recvmsg, "GET to-recv ") \ + EM(rxrpc_skb_get_to_recvmsg_oos, "GET to-recv-o") \ + EM(rxrpc_skb_new_encap_rcv, "NEW encap-rcv") \ + EM(rxrpc_skb_new_error_report, "NEW error-rpt") \ + EM(rxrpc_skb_new_jumbo_subpacket, "NEW jumbo-sub") \ + EM(rxrpc_skb_new_unshared, "NEW unshared ") \ + EM(rxrpc_skb_put_ack, "PUT ack ") \ + EM(rxrpc_skb_put_conn_work, "PUT conn-work") \ + EM(rxrpc_skb_put_error_report, "PUT error-rep") \ + EM(rxrpc_skb_put_input, "PUT input ") \ + EM(rxrpc_skb_put_jumbo_subpacket, "PUT jumbo-sub") \ + EM(rxrpc_skb_put_lose, "PUT lose ") \ + EM(rxrpc_skb_put_purge, "PUT purge ") \ + EM(rxrpc_skb_put_rotate, "PUT rotate ") \ + EM(rxrpc_skb_put_unknown, "PUT unknown ") \ + EM(rxrpc_skb_see_conn_work, "SEE conn-work") \ + EM(rxrpc_skb_see_local_work, "SEE locl-work") \ + EM(rxrpc_skb_see_recvmsg, "SEE recvmsg ") \ + EM(rxrpc_skb_see_reject, "SEE reject ") \ + EM(rxrpc_skb_see_rotate, "SEE rotate ") \ + E_(rxrpc_skb_see_version, "SEE version ") #define rxrpc_local_traces \ EM(rxrpc_local_free, "FREE ") \ @@ -582,33 +594,30 @@ TRACE_EVENT(rxrpc_call, ); TRACE_EVENT(rxrpc_skb, - TP_PROTO(struct sk_buff *skb, enum rxrpc_skb_trace op, - int usage, int mod_count, const void *where), + TP_PROTO(struct sk_buff *skb, int usage, int mod_count, + enum rxrpc_skb_trace why), - TP_ARGS(skb, op, usage, mod_count, where), + TP_ARGS(skb, usage, mod_count, why), TP_STRUCT__entry( __field(struct sk_buff *, skb ) - __field(enum rxrpc_skb_trace, op ) __field(int, usage ) __field(int, mod_count ) - __field(const void *, where ) + __field(enum rxrpc_skb_trace, why ) ), TP_fast_assign( __entry->skb = skb; - __entry->op = op; __entry->usage = usage; __entry->mod_count = mod_count; - __entry->where = where; + __entry->why = why; ), - TP_printk("s=%p Rx %s u=%d m=%d p=%pSR", + TP_printk("s=%p Rx %s u=%d m=%d", __entry->skb, - __print_symbolic(__entry->op, rxrpc_skb_traces), + __print_symbolic(__entry->why, rxrpc_skb_traces), __entry->usage, - __entry->mod_count, - __entry->where) + __entry->mod_count) ); TRACE_EVENT(rxrpc_rx_packet, diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c index 0c8d2186cda8..29ca02e53c47 100644 --- a/net/rxrpc/call_event.c +++ b/net/rxrpc/call_event.c @@ -153,7 +153,7 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j) spin_lock_bh(&call->acks_ack_lock); ack_skb = call->acks_soft_tbl; if (ack_skb) { - rxrpc_get_skb(ack_skb, rxrpc_skb_ack); + rxrpc_get_skb(ack_skb, rxrpc_skb_get_ack); ack = (void *)ack_skb->data + sizeof(struct rxrpc_wire_header); } spin_unlock_bh(&call->acks_ack_lock); @@ -251,7 +251,7 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j) no_further_resend: spin_unlock(&call->tx_lock); no_resend: - rxrpc_free_skb(ack_skb, rxrpc_skb_freed); + rxrpc_free_skb(ack_skb, rxrpc_skb_put_ack); resend_at = nsecs_to_jiffies(ktime_to_ns(ktime_sub(now, oldest))); 
resend_at += jiffies + rxrpc_get_rto_backoff(call->peer, diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c index afd957f6dc1c..815209673115 100644 --- a/net/rxrpc/call_object.c +++ b/net/rxrpc/call_object.c @@ -663,7 +663,7 @@ void rxrpc_cleanup_call(struct rxrpc_call *call) rxrpc_put_txbuf(txb, rxrpc_txbuf_put_cleaned); } rxrpc_put_txbuf(call->tx_pending, rxrpc_txbuf_put_cleaned); - rxrpc_free_skb(call->acks_soft_tbl, rxrpc_skb_cleaned); + rxrpc_free_skb(call->acks_soft_tbl, rxrpc_skb_put_ack); call_rcu(&call->rcu, rxrpc_rcu_destroy_call); } diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c index 817f895c77ca..49d885f73fa5 100644 --- a/net/rxrpc/conn_event.c +++ b/net/rxrpc/conn_event.c @@ -437,7 +437,7 @@ static void rxrpc_do_process_connection(struct rxrpc_connection *conn) /* go through the conn-level event packets, releasing the ref on this * connection that each one has when we've finished with it */ while ((skb = skb_dequeue(&conn->rx_queue))) { - rxrpc_see_skb(skb, rxrpc_skb_seen); + rxrpc_see_skb(skb, rxrpc_skb_see_conn_work); ret = rxrpc_process_event(conn, skb, &abort_code); switch (ret) { case -EPROTO: @@ -449,7 +449,7 @@ static void rxrpc_do_process_connection(struct rxrpc_connection *conn) goto requeue_and_leave; case -ECONNABORTED: default: - rxrpc_free_skb(skb, rxrpc_skb_freed); + rxrpc_free_skb(skb, rxrpc_skb_put_conn_work); break; } } @@ -463,7 +463,7 @@ static void rxrpc_do_process_connection(struct rxrpc_connection *conn) protocol_error: if (rxrpc_abort_connection(conn, ret, abort_code) < 0) goto requeue_and_leave; - rxrpc_free_skb(skb, rxrpc_skb_freed); + rxrpc_free_skb(skb, rxrpc_skb_put_conn_work); return; } diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index 09b44cd11c9b..ab8b7a1be935 100644 --- a/net/rxrpc/input.c +++ b/net/rxrpc/input.c @@ -485,7 +485,7 @@ static void rxrpc_input_data_one(struct rxrpc_call *call, struct sk_buff *skb) rxrpc_propose_ack_input_data); err_free: - rxrpc_free_skb(skb, rxrpc_skb_freed); + rxrpc_free_skb(skb, rxrpc_skb_put_input); } /* @@ -513,7 +513,7 @@ static bool rxrpc_input_split_jumbo(struct rxrpc_call *call, struct sk_buff *skb kdebug("couldn't clone"); return false; } - rxrpc_new_skb(jskb, rxrpc_skb_cloned_jumbo); + rxrpc_new_skb(jskb, rxrpc_skb_new_jumbo_subpacket); jsp = rxrpc_skb(jskb); jsp->offset = offset; jsp->len = RXRPC_JUMBO_DATALEN; @@ -553,7 +553,7 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb) state = READ_ONCE(call->state); if (state >= RXRPC_CALL_COMPLETE) { - rxrpc_free_skb(skb, rxrpc_skb_freed); + rxrpc_free_skb(skb, rxrpc_skb_put_input); return; } @@ -563,14 +563,14 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb) if (sp->hdr.securityIndex != 0) { struct sk_buff *nskb = skb_unshare(skb, GFP_ATOMIC); if (!nskb) { - rxrpc_eaten_skb(skb, rxrpc_skb_unshared_nomem); + rxrpc_eaten_skb(skb, rxrpc_skb_eaten_by_unshare_nomem); return; } if (nskb != skb) { - rxrpc_eaten_skb(skb, rxrpc_skb_received); + rxrpc_eaten_skb(skb, rxrpc_skb_eaten_by_unshare); skb = nskb; - rxrpc_new_skb(skb, rxrpc_skb_unshared); + rxrpc_new_skb(skb, rxrpc_skb_new_unshared); sp = rxrpc_skb(skb); } } @@ -609,7 +609,7 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb) rxrpc_notify_socket(call); spin_unlock(&call->input_lock); - rxrpc_free_skb(skb, rxrpc_skb_freed); + rxrpc_free_skb(skb, rxrpc_skb_put_input); _leave(" [queued]"); } @@ -994,8 +994,8 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) out: 
spin_unlock(&call->input_lock); out_not_locked: - rxrpc_free_skb(skb_put, rxrpc_skb_freed); - rxrpc_free_skb(skb_old, rxrpc_skb_freed); + rxrpc_free_skb(skb_put, rxrpc_skb_put_input); + rxrpc_free_skb(skb_old, rxrpc_skb_put_ack); } /* @@ -1075,7 +1075,7 @@ static void rxrpc_input_call_packet(struct rxrpc_call *call, break; } - rxrpc_free_skb(skb, rxrpc_skb_freed); + rxrpc_free_skb(skb, rxrpc_skb_put_input); no_free: _leave(""); } @@ -1137,7 +1137,7 @@ static void rxrpc_post_packet_to_local(struct rxrpc_local *local, skb_queue_tail(&local->event_queue, skb); rxrpc_queue_local(local); } else { - rxrpc_free_skb(skb, rxrpc_skb_freed); + rxrpc_free_skb(skb, rxrpc_skb_put_input); } } @@ -1150,7 +1150,7 @@ static void rxrpc_reject_packet(struct rxrpc_local *local, struct sk_buff *skb) skb_queue_tail(&local->reject_queue, skb); rxrpc_queue_local(local); } else { - rxrpc_free_skb(skb, rxrpc_skb_freed); + rxrpc_free_skb(skb, rxrpc_skb_put_input); } } @@ -1228,7 +1228,7 @@ int rxrpc_input_packet(struct sock *udp_sk, struct sk_buff *skb) if (skb->tstamp == 0) skb->tstamp = ktime_get_real(); - rxrpc_new_skb(skb, rxrpc_skb_received); + rxrpc_new_skb(skb, rxrpc_skb_new_encap_rcv); skb_pull(skb, sizeof(struct udphdr)); @@ -1245,7 +1245,7 @@ int rxrpc_input_packet(struct sock *udp_sk, struct sk_buff *skb) static int lose; if ((lose++ & 7) == 7) { trace_rxrpc_rx_lose(sp); - rxrpc_free_skb(skb, rxrpc_skb_lost); + rxrpc_free_skb(skb, rxrpc_skb_put_lose); return 0; } } @@ -1286,14 +1286,14 @@ int rxrpc_input_packet(struct sock *udp_sk, struct sk_buff *skb) if (sp->hdr.securityIndex != 0) { struct sk_buff *nskb = skb_unshare(skb, GFP_ATOMIC); if (!nskb) { - rxrpc_eaten_skb(skb, rxrpc_skb_unshared_nomem); + rxrpc_eaten_skb(skb, rxrpc_skb_eaten_by_unshare_nomem); goto out; } if (nskb != skb) { - rxrpc_eaten_skb(skb, rxrpc_skb_received); + rxrpc_eaten_skb(skb, rxrpc_skb_eaten_by_unshare); skb = nskb; - rxrpc_new_skb(skb, rxrpc_skb_unshared); + rxrpc_new_skb(skb, rxrpc_skb_new_unshared); sp = rxrpc_skb(skb); } } @@ -1434,7 +1434,7 @@ int rxrpc_input_packet(struct sock *udp_sk, struct sk_buff *skb) goto out; discard: - rxrpc_free_skb(skb, rxrpc_skb_freed); + rxrpc_free_skb(skb, rxrpc_skb_put_input); out: trace_rxrpc_rx_done(0, 0); return 0; diff --git a/net/rxrpc/local_event.c b/net/rxrpc/local_event.c index f23a3fbabbda..c344383a20b2 100644 --- a/net/rxrpc/local_event.c +++ b/net/rxrpc/local_event.c @@ -88,7 +88,7 @@ void rxrpc_process_local_events(struct rxrpc_local *local) if (skb) { struct rxrpc_skb_priv *sp = rxrpc_skb(skb); - rxrpc_see_skb(skb, rxrpc_skb_seen); + rxrpc_see_skb(skb, rxrpc_skb_see_local_work); _debug("{%d},{%u}", local->debug_id, sp->hdr.type); switch (sp->hdr.type) { @@ -105,7 +105,7 @@ void rxrpc_process_local_events(struct rxrpc_local *local) break; } - rxrpc_free_skb(skb, rxrpc_skb_freed); + rxrpc_free_skb(skb, rxrpc_skb_put_input); } _leave(""); diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index d324e88f7642..131c7a76fb06 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -615,7 +615,7 @@ void rxrpc_reject_packets(struct rxrpc_local *local) memset(&whdr, 0, sizeof(whdr)); while ((skb = skb_dequeue(&local->reject_queue))) { - rxrpc_see_skb(skb, rxrpc_skb_seen); + rxrpc_see_skb(skb, rxrpc_skb_see_reject); sp = rxrpc_skb(skb); switch (skb->mark) { @@ -631,7 +631,7 @@ void rxrpc_reject_packets(struct rxrpc_local *local) ioc = 2; break; default: - rxrpc_free_skb(skb, rxrpc_skb_freed); + rxrpc_free_skb(skb, rxrpc_skb_put_input); continue; } @@ -656,7 +656,7 @@ void 
rxrpc_reject_packets(struct rxrpc_local *local) rxrpc_tx_point_reject); } - rxrpc_free_skb(skb, rxrpc_skb_freed); + rxrpc_free_skb(skb, rxrpc_skb_put_input); } _leave(""); diff --git a/net/rxrpc/peer_event.c b/net/rxrpc/peer_event.c index b28739d10927..f35cfc458dcf 100644 --- a/net/rxrpc/peer_event.c +++ b/net/rxrpc/peer_event.c @@ -158,12 +158,12 @@ void rxrpc_error_report(struct sock *sk) _leave("UDP socket errqueue empty"); return; } - rxrpc_new_skb(skb, rxrpc_skb_received); + rxrpc_new_skb(skb, rxrpc_skb_new_error_report); serr = SKB_EXT_ERR(skb); if (!skb->len && serr->ee.ee_origin == SO_EE_ORIGIN_TIMESTAMPING) { _leave("UDP empty message"); rcu_read_unlock(); - rxrpc_free_skb(skb, rxrpc_skb_freed); + rxrpc_free_skb(skb, rxrpc_skb_put_error_report); return; } @@ -172,7 +172,7 @@ void rxrpc_error_report(struct sock *sk) peer = NULL; if (!peer) { rcu_read_unlock(); - rxrpc_free_skb(skb, rxrpc_skb_freed); + rxrpc_free_skb(skb, rxrpc_skb_put_error_report); _leave(" [no peer]"); return; } @@ -189,7 +189,7 @@ void rxrpc_error_report(struct sock *sk) rxrpc_store_error(peer, serr); out: rcu_read_unlock(); - rxrpc_free_skb(skb, rxrpc_skb_freed); + rxrpc_free_skb(skb, rxrpc_skb_put_error_report); rxrpc_put_peer(peer, rxrpc_peer_put_input_error); _leave(""); diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c index c84d2b620396..bfac9e09347e 100644 --- a/net/rxrpc/recvmsg.c +++ b/net/rxrpc/recvmsg.c @@ -229,7 +229,7 @@ static void rxrpc_rotate_rx_window(struct rxrpc_call *call) _enter("%d", call->debug_id); skb = skb_dequeue(&call->recvmsg_queue); - rxrpc_see_skb(skb, rxrpc_skb_rotated); + rxrpc_see_skb(skb, rxrpc_skb_see_rotate); sp = rxrpc_skb(skb); tseq = sp->hdr.seq; @@ -240,7 +240,7 @@ static void rxrpc_rotate_rx_window(struct rxrpc_call *call) if (after(tseq, call->rx_consumed)) smp_store_release(&call->rx_consumed, tseq); - rxrpc_free_skb(skb, rxrpc_skb_freed); + rxrpc_free_skb(skb, rxrpc_skb_put_rotate); trace_rxrpc_receive(call, last ? rxrpc_receive_rotate_last : rxrpc_receive_rotate, serial, call->rx_consumed); @@ -302,7 +302,7 @@ static int rxrpc_recvmsg_data(struct socket *sock, struct rxrpc_call *call, */ skb = skb_peek(&call->recvmsg_queue); while (skb) { - rxrpc_see_skb(skb, rxrpc_skb_seen); + rxrpc_see_skb(skb, rxrpc_skb_see_recvmsg); sp = rxrpc_skb(skb); seq = sp->hdr.seq; diff --git a/net/rxrpc/skbuff.c b/net/rxrpc/skbuff.c index 0c827d5bb2b8..ebe0c75e7b07 100644 --- a/net/rxrpc/skbuff.c +++ b/net/rxrpc/skbuff.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0-or-later -/* ar-skbuff.c: socket buffer destruction handling +/* Socket buffer accounting * * Copyright (C) 2007 Red Hat, Inc. All Rights Reserved. * Written by David Howells (dhowells@redhat.com) @@ -19,56 +19,50 @@ /* * Note the allocation or reception of a socket buffer. */ -void rxrpc_new_skb(struct sk_buff *skb, enum rxrpc_skb_trace op) +void rxrpc_new_skb(struct sk_buff *skb, enum rxrpc_skb_trace why) { - const void *here = __builtin_return_address(0); int n = atomic_inc_return(select_skb_count(skb)); - trace_rxrpc_skb(skb, op, refcount_read(&skb->users), n, here); + trace_rxrpc_skb(skb, refcount_read(&skb->users), n, why); } /* * Note the re-emergence of a socket buffer from a queue or buffer. 
*/ -void rxrpc_see_skb(struct sk_buff *skb, enum rxrpc_skb_trace op) +void rxrpc_see_skb(struct sk_buff *skb, enum rxrpc_skb_trace why) { - const void *here = __builtin_return_address(0); if (skb) { int n = atomic_read(select_skb_count(skb)); - trace_rxrpc_skb(skb, op, refcount_read(&skb->users), n, here); + trace_rxrpc_skb(skb, refcount_read(&skb->users), n, why); } } /* * Note the addition of a ref on a socket buffer. */ -void rxrpc_get_skb(struct sk_buff *skb, enum rxrpc_skb_trace op) +void rxrpc_get_skb(struct sk_buff *skb, enum rxrpc_skb_trace why) { - const void *here = __builtin_return_address(0); int n = atomic_inc_return(select_skb_count(skb)); - trace_rxrpc_skb(skb, op, refcount_read(&skb->users), n, here); + trace_rxrpc_skb(skb, refcount_read(&skb->users), n, why); skb_get(skb); } /* * Note the dropping of a ref on a socket buffer by the core. */ -void rxrpc_eaten_skb(struct sk_buff *skb, enum rxrpc_skb_trace op) +void rxrpc_eaten_skb(struct sk_buff *skb, enum rxrpc_skb_trace why) { - const void *here = __builtin_return_address(0); int n = atomic_inc_return(&rxrpc_n_rx_skbs); - trace_rxrpc_skb(skb, op, 0, n, here); + trace_rxrpc_skb(skb, 0, n, why); } /* * Note the destruction of a socket buffer. */ -void rxrpc_free_skb(struct sk_buff *skb, enum rxrpc_skb_trace op) +void rxrpc_free_skb(struct sk_buff *skb, enum rxrpc_skb_trace why) { - const void *here = __builtin_return_address(0); if (skb) { - int n; - n = atomic_dec_return(select_skb_count(skb)); - trace_rxrpc_skb(skb, op, refcount_read(&skb->users), n, here); + int n = atomic_dec_return(select_skb_count(skb)); + trace_rxrpc_skb(skb, refcount_read(&skb->users), n, why); kfree_skb(skb); } } @@ -78,12 +72,12 @@ void rxrpc_free_skb(struct sk_buff *skb, enum rxrpc_skb_trace op) */ void rxrpc_purge_queue(struct sk_buff_head *list) { - const void *here = __builtin_return_address(0); struct sk_buff *skb; + while ((skb = skb_dequeue((list))) != NULL) { int n = atomic_dec_return(select_skb_count(skb)); - trace_rxrpc_skb(skb, rxrpc_skb_purged, - refcount_read(&skb->users), n, here); + trace_rxrpc_skb(skb, refcount_read(&skb->users), n, + rxrpc_skb_put_purge); kfree_skb(skb); } } From patchwork Wed Nov 30 16:56:28 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 27892 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp1040311wrr; Wed, 30 Nov 2022 09:02:27 -0800 (PST) X-Google-Smtp-Source: AA0mqf74+RYDddsmqoqddxY3FjIrT37CdNvvydiSzVM92z42bQy9a+wD07L6pqFMdpwXCTirz9X4 X-Received: by 2002:a05:6402:321e:b0:469:ebc0:2247 with SMTP id g30-20020a056402321e00b00469ebc02247mr37013019eda.217.1669827747330; Wed, 30 Nov 2022 09:02:27 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669827747; cv=none; d=google.com; s=arc-20160816; b=n4dqC0IunX/5iE4NnlWs96BA0Nd4zUnsJ8EV1xi4zgnbNu4Y72ysh/7L/nn6Jfla9m 9yc09/bAWAv4dA1A0GmuwVNLvENbQuS4KGj2fVdNrKnkxz7wWMa/4Ho+UAmsOnwepMJQ QV9EYjSRPJu9qHxvFAsOCKX6KsKVM5uXKzOa15ui4m3vuCp1qsH3zjR/hyz0uAcXGmgC TXP+bMG4VQ6TuVQBPPcvO9LaXdHuRj7fhHJ94TffGfn7xtj59mWa+k7erPFA90/Z+YaA p/TNGrY0Hv4W9kii37Lxq56KNBWUj7i3aBmF4AxhND5SnO3vIlrIXTFr47Jvh7vUKQQD GW3A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:cc:to:from :subject:organization:dkim-signature; bh=e+gEyqHAYcukbUT7EiIRiNqkNampnQzNPJX4nGRXEqo=; 
Organization: Red Hat UK Ltd.
Subject: [PATCH net-next 15/35] rxrpc: Don't hold a ref for call timer or workqueue
From: David Howells
To: netdev@vger.kernel.org
Cc: Marc Dionne, linux-afs@lists.infradead.org, dhowells@redhat.com, linux-kernel@vger.kernel.org
Date: Wed, 30 Nov 2022 16:56:28 +0000
Message-ID: <166982738804.621383.17391941989068713587.stgit@warthog.procyon.org.uk>
In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk>
References: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk>
User-Agent: StGit/1.5
MIME-Version: 1.0

Currently, rxrpc gives the call timer a ref on the call when it starts it, and this ref is passed along to the workqueue by the timer expiration function. The problem comes when queue_work() fails (ie. the work item is already queued): the timer routine must put the ref - but this may cause the cleanup code to run. This has the unfortunate effect that the cleanup code may then be run in softirq context - which means that any spinlocks it might need to touch have to be guarded to disable softirqs (ie. they need a "_bh" suffix).

Fix this by the following means:

 (1) Don't give a ref to the timer.

 (2) Make the expiration function do nothing if the refcount is 0. Note that this is more of an optimisation.

 (3) Make sure that the cleanup routine waits for the timer to complete.

However, this has the consequence that the timer cannot give a ref to the work item, so the following changes are also necessary:

 (4) Don't give a ref to the work item.

 (5) Make the work item return as soon as possible if it sees that the refcount is 0.

 (6) Make sure that the cleanup routine waits for the work item to complete.

Unfortunately, neither the timer nor the work item can simply get around the problem by using refcount_inc_not_zero(), as the waits would still have to be done and there would still be the possibility of having to put the ref in the expiration function.

Note that the call work item is going to go away with the work being transferred to the I/O thread, so the wait in (6) will become obsolete.
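As a rough sketch of the resulting pattern (my_obj, my_obj_destroy and the other names below are made up for illustration and are not the actual rxrpc symbols), the timer and the work item run without holding refs, and the destruction path synchronises with both instead:

#include <linux/refcount.h>
#include <linux/timer.h>
#include <linux/workqueue.h>
#include <linux/slab.h>

struct my_obj {
	refcount_t		ref;
	struct timer_list	timer;
	struct work_struct	processor;
};

/* Timer expiry: no ref is held for the timer; just poke the work item,
 * and do nothing if the object is already dying.
 */
static void my_obj_timer_expired(struct timer_list *t)
{
	struct my_obj *obj = from_timer(obj, t, timer);

	if (refcount_read(&obj->ref) > 0)
		queue_work(system_wq, &obj->processor);
}

/* Event processor: no ref is held for the work item either. */
static void my_obj_process(struct work_struct *work)
{
	struct my_obj *obj = container_of(work, struct my_obj, processor);

	if (refcount_read(&obj->ref) == 0)
		return;		/* Dying; the destroyer will flush us */
	/* ... handle events ... */
}

/* Destruction runs in process context and waits for the timer and the
 * work item rather than having them pin the object with refs.
 */
static void my_obj_destroy(struct my_obj *obj)
{
	del_timer_sync(&obj->timer);
	cancel_work_sync(&obj->processor);	/* may re-arm the timer */
	del_timer_sync(&obj->timer);
	kfree(obj);
}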
Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- include/trace/events/rxrpc.h | 6 -- net/rxrpc/ar-internal.h | 6 +- net/rxrpc/call_event.c | 11 ++-- net/rxrpc/call_object.c | 111 ++++++++++++++++-------------------------- net/rxrpc/txbuf.c | 2 + 5 files changed, 52 insertions(+), 84 deletions(-) diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index 5a2292baffc8..4538de0079a5 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -155,11 +155,9 @@ EM(rxrpc_call_get_release_sock, "GET rel-sock") \ EM(rxrpc_call_get_sendmsg, "GET sendmsg ") \ EM(rxrpc_call_get_send_ack, "GET send-ack") \ - EM(rxrpc_call_get_timer, "GET timer ") \ EM(rxrpc_call_get_userid, "GET user-id ") \ EM(rxrpc_call_new_client, "NEW client ") \ EM(rxrpc_call_new_prealloc_service, "NEW prealloc") \ - EM(rxrpc_call_put_already_queued, "PUT alreadyq") \ EM(rxrpc_call_put_discard_prealloc, "PUT disc-pre") \ EM(rxrpc_call_put_input, "PUT input ") \ EM(rxrpc_call_put_kernel, "PUT kernel ") \ @@ -168,11 +166,8 @@ EM(rxrpc_call_put_release_sock_tba, "PUT rls-sk-a") \ EM(rxrpc_call_put_send_ack, "PUT send-ack") \ EM(rxrpc_call_put_sendmsg, "PUT sendmsg ") \ - EM(rxrpc_call_put_timer, "PUT timer ") \ - EM(rxrpc_call_put_timer_already, "PUT timer-al") \ EM(rxrpc_call_put_unnotify, "PUT unnotify") \ EM(rxrpc_call_put_userid_exists, "PUT u-exists") \ - EM(rxrpc_call_put_work, "PUT work ") \ EM(rxrpc_call_queue_abort, "QUE abort ") \ EM(rxrpc_call_queue_requeue, "QUE requeue ") \ EM(rxrpc_call_queue_resend, "QUE resend ") \ @@ -368,6 +363,7 @@ EM(rxrpc_txbuf_put_rotated, "PUT ROTATED") \ EM(rxrpc_txbuf_put_send_aborted, "PUT SEND-X ") \ EM(rxrpc_txbuf_put_trans, "PUT TRANS ") \ + EM(rxrpc_txbuf_see_out_of_step, "OUT-OF-STEP") \ EM(rxrpc_txbuf_see_send_more, "SEE SEND+ ") \ E_(rxrpc_txbuf_see_unacked, "SEE UNACKED") diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index c588c0e81f63..03523a864c11 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -598,6 +598,7 @@ struct rxrpc_call { u32 next_req_timo; /* Timeout for next Rx request packet (jif) */ struct timer_list timer; /* Combined event timer */ struct work_struct processor; /* Event processor */ + struct work_struct destroyer; /* In-process-context destroyer */ rxrpc_notify_rx_t notify_rx; /* kernel service Rx notification function */ struct list_head link; /* link in master call list */ struct list_head chan_wait_link; /* Link in conn->bundle->waiting_calls */ @@ -827,8 +828,6 @@ void rxrpc_reduce_call_timer(struct rxrpc_call *call, unsigned long now, enum rxrpc_timer_trace why); -void rxrpc_delete_call_timer(struct rxrpc_call *call); - /* * call_object.c */ @@ -847,8 +846,7 @@ void rxrpc_incoming_call(struct rxrpc_sock *, struct rxrpc_call *, struct sk_buff *); void rxrpc_release_call(struct rxrpc_sock *, struct rxrpc_call *); void rxrpc_release_calls_on_socket(struct rxrpc_sock *); -bool __rxrpc_queue_call(struct rxrpc_call *, enum rxrpc_call_trace); -bool rxrpc_queue_call(struct rxrpc_call *, enum rxrpc_call_trace); +void rxrpc_queue_call(struct rxrpc_call *, enum rxrpc_call_trace); void rxrpc_see_call(struct rxrpc_call *, enum rxrpc_call_trace); bool rxrpc_try_get_call(struct rxrpc_call *, enum rxrpc_call_trace); void rxrpc_get_call(struct rxrpc_call *, enum rxrpc_call_trace); diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c index 29ca02e53c47..049b92b1c040 100644 --- a/net/rxrpc/call_event.c +++ b/net/rxrpc/call_event.c @@ -323,8 +323,8 @@ void 
rxrpc_process_call(struct work_struct *work) rxrpc_shrink_call_tx_buffer(call); if (call->state == RXRPC_CALL_COMPLETE) { - rxrpc_delete_call_timer(call); - goto out_put; + del_timer_sync(&call->timer); + goto out; } /* Work out if any timeouts tripped */ @@ -432,16 +432,15 @@ void rxrpc_process_call(struct work_struct *work) rxrpc_reduce_call_timer(call, next, now, rxrpc_timer_restart); /* other events may have been raised since we started checking */ - if (call->events && call->state < RXRPC_CALL_COMPLETE) + if (call->events) goto requeue; -out_put: - rxrpc_put_call(call, rxrpc_call_put_work); out: _leave(""); return; requeue: - __rxrpc_queue_call(call, rxrpc_call_queue_requeue); + if (call->state < RXRPC_CALL_COMPLETE) + rxrpc_queue_call(call, rxrpc_call_queue_requeue); goto out; } diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c index 815209673115..9cd7e0190ef4 100644 --- a/net/rxrpc/call_object.c +++ b/net/rxrpc/call_object.c @@ -53,9 +53,7 @@ static void rxrpc_call_timer_expired(struct timer_list *t) if (call->state < RXRPC_CALL_COMPLETE) { trace_rxrpc_timer_expired(call, jiffies); - __rxrpc_queue_call(call, rxrpc_call_queue_timer); - } else { - rxrpc_put_call(call, rxrpc_call_put_already_queued); + rxrpc_queue_call(call, rxrpc_call_queue_timer); } } @@ -64,21 +62,14 @@ void rxrpc_reduce_call_timer(struct rxrpc_call *call, unsigned long now, enum rxrpc_timer_trace why) { - if (rxrpc_try_get_call(call, rxrpc_call_get_timer)) { - trace_rxrpc_timer(call, why, now); - if (timer_reduce(&call->timer, expire_at)) - rxrpc_put_call(call, rxrpc_call_put_timer_already); - } -} - -void rxrpc_delete_call_timer(struct rxrpc_call *call) -{ - if (del_timer_sync(&call->timer)) - rxrpc_put_call(call, rxrpc_call_put_timer); + trace_rxrpc_timer(call, why, now); + timer_reduce(&call->timer, expire_at); } static struct lock_class_key rxrpc_call_user_mutex_lock_class_key; +static void rxrpc_destroy_call(struct work_struct *); + /* * find an extant server call * - called in process context with IRQs enabled @@ -139,7 +130,8 @@ struct rxrpc_call *rxrpc_alloc_call(struct rxrpc_sock *rx, gfp_t gfp, &rxrpc_call_user_mutex_lock_class_key); timer_setup(&call->timer, rxrpc_call_timer_expired, 0); - INIT_WORK(&call->processor, &rxrpc_process_call); + INIT_WORK(&call->processor, rxrpc_process_call); + INIT_WORK(&call->destroyer, rxrpc_destroy_call); INIT_LIST_HEAD(&call->link); INIT_LIST_HEAD(&call->chan_wait_link); INIT_LIST_HEAD(&call->accept_link); @@ -423,34 +415,12 @@ void rxrpc_incoming_call(struct rxrpc_sock *rx, } /* - * Queue a call's work processor, getting a ref to pass to the work queue. + * Queue a call's work processor. */ -bool rxrpc_queue_call(struct rxrpc_call *call, enum rxrpc_call_trace why) +void rxrpc_queue_call(struct rxrpc_call *call, enum rxrpc_call_trace why) { - int n; - - if (!__refcount_inc_not_zero(&call->ref, &n)) - return false; if (rxrpc_queue_work(&call->processor)) - trace_rxrpc_call(call->debug_id, n + 1, 0, why); - else - rxrpc_put_call(call, rxrpc_call_put_already_queued); - return true; -} - -/* - * Queue a call's work processor, passing the callers ref to the work queue. 
- */ -bool __rxrpc_queue_call(struct rxrpc_call *call, enum rxrpc_call_trace why) -{ - int n = refcount_read(&call->ref); - - ASSERTCMP(n, >=, 1); - if (rxrpc_queue_work(&call->processor)) - trace_rxrpc_call(call->debug_id, n, 0, why); - else - rxrpc_put_call(call, rxrpc_call_put_already_queued); - return true; + trace_rxrpc_call(call->debug_id, refcount_read(&call->ref), 0, why); } /* @@ -514,7 +484,7 @@ void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call) BUG(); rxrpc_put_call_slot(call); - rxrpc_delete_call_timer(call); + del_timer_sync(&call->timer); /* Make sure we don't get any more notifications */ write_lock_bh(&rx->recvmsg_lock); @@ -612,36 +582,41 @@ void rxrpc_put_call(struct rxrpc_call *call, enum rxrpc_call_trace why) } /* - * Final call destruction - but must be done in process context. + * Free up the call under RCU. */ -static void rxrpc_destroy_call(struct work_struct *work) +static void rxrpc_rcu_free_call(struct rcu_head *rcu) { - struct rxrpc_call *call = container_of(work, struct rxrpc_call, processor); - struct rxrpc_net *rxnet = call->rxnet; - - rxrpc_delete_call_timer(call); + struct rxrpc_call *call = container_of(rcu, struct rxrpc_call, rcu); + struct rxrpc_net *rxnet = READ_ONCE(call->rxnet); - rxrpc_put_connection(call->conn, rxrpc_conn_put_call); - rxrpc_put_peer(call->peer, rxrpc_peer_put_call); kmem_cache_free(rxrpc_call_jar, call); if (atomic_dec_and_test(&rxnet->nr_calls)) wake_up_var(&rxnet->nr_calls); } /* - * Final call destruction under RCU. + * Final call destruction - but must be done in process context. */ -static void rxrpc_rcu_destroy_call(struct rcu_head *rcu) +static void rxrpc_destroy_call(struct work_struct *work) { - struct rxrpc_call *call = container_of(rcu, struct rxrpc_call, rcu); + struct rxrpc_call *call = container_of(work, struct rxrpc_call, destroyer); + struct rxrpc_txbuf *txb; - if (in_softirq()) { - INIT_WORK(&call->processor, rxrpc_destroy_call); - if (!rxrpc_queue_work(&call->processor)) - BUG(); - } else { - rxrpc_destroy_call(&call->processor); + del_timer_sync(&call->timer); + cancel_work_sync(&call->processor); /* The processor may restart the timer */ + del_timer_sync(&call->timer); + + rxrpc_cleanup_ring(call); + while ((txb = list_first_entry_or_null(&call->tx_buffer, + struct rxrpc_txbuf, call_link))) { + list_del(&txb->call_link); + rxrpc_put_txbuf(txb, rxrpc_txbuf_put_cleaned); } + rxrpc_put_txbuf(call->tx_pending, rxrpc_txbuf_put_cleaned); + rxrpc_free_skb(call->acks_soft_tbl, rxrpc_skb_put_ack); + rxrpc_put_connection(call->conn, rxrpc_conn_put_call); + rxrpc_put_peer(call->peer, rxrpc_peer_put_call); + call_rcu(&call->rcu, rxrpc_rcu_free_call); } /* @@ -649,23 +624,21 @@ static void rxrpc_rcu_destroy_call(struct rcu_head *rcu) */ void rxrpc_cleanup_call(struct rxrpc_call *call) { - struct rxrpc_txbuf *txb; - memset(&call->sock_node, 0xcd, sizeof(call->sock_node)); ASSERTCMP(call->state, ==, RXRPC_CALL_COMPLETE); ASSERT(test_bit(RXRPC_CALL_RELEASED, &call->flags)); - rxrpc_cleanup_ring(call); - while ((txb = list_first_entry_or_null(&call->tx_buffer, - struct rxrpc_txbuf, call_link))) { - list_del(&txb->call_link); - rxrpc_put_txbuf(txb, rxrpc_txbuf_put_cleaned); - } - rxrpc_put_txbuf(call->tx_pending, rxrpc_txbuf_put_cleaned); - rxrpc_free_skb(call->acks_soft_tbl, rxrpc_skb_put_ack); + del_timer_sync(&call->timer); + cancel_work(&call->processor); - call_rcu(&call->rcu, rxrpc_rcu_destroy_call); + if (in_softirq() || work_busy(&call->processor)) + /* Can't use the rxrpc workqueue as we need to 
cancel/flush + * something that may be running/waiting there. + */ + schedule_work(&call->destroyer); + else + rxrpc_destroy_call(&call->destroyer); } /* diff --git a/net/rxrpc/txbuf.c b/net/rxrpc/txbuf.c index 96bfee89927b..f93dc666a3a0 100644 --- a/net/rxrpc/txbuf.c +++ b/net/rxrpc/txbuf.c @@ -120,6 +120,8 @@ void rxrpc_shrink_call_tx_buffer(struct rxrpc_call *call) if (before(hard_ack, txb->seq)) break; + if (txb->seq != call->tx_bottom + 1) + rxrpc_see_txbuf(txb, rxrpc_txbuf_see_out_of_step); ASSERTCMP(txb->seq, ==, call->tx_bottom + 1); call->tx_bottom++; list_del_rcu(&txb->call_link); From patchwork Wed Nov 30 16:56:36 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 27889 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp1039618wrr; Wed, 30 Nov 2022 09:01:25 -0800 (PST) X-Google-Smtp-Source: AA0mqf7G4euwyAfIe01TGrkfslZjqP9Zz9L8l1MSJxr+9x76Y+CN2y0SAfHOzk3MxelYHx2s2CcL X-Received: by 2002:a17:906:8493:b0:7be:a769:2f41 with SMTP id m19-20020a170906849300b007bea7692f41mr15411610ejx.690.1669827685429; Wed, 30 Nov 2022 09:01:25 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669827685; cv=none; d=google.com; s=arc-20160816; b=cjOVhPmqCN9F6UpwPaWhcYh6EHt5hbO/V9H6NqXjYu/2lXbCLAIGh9Hft7663vT5FV S748lACVdVk8UxD+Zc2LpiKrvrUEh/JHc+2ccuSZ/6NrTxhnRJuuGIlINsS3fwYyji6e UcrJrgs6He0JFYx/ueka3+uw8CNe8gprse32vZHhcRY4yef1rrxCUPG6b+Oisk4pL2Px gGlJwxtsPyeG01DetCjLuoLu9ABF9SLqtcsbcfhzv8SP8Y5sbIMHxyMRRSZ/3fTkj6fG nzNK3FbthJYHvnaHmp+ChcFqjLx0NlOcB86N1BR/YHSsNFcOiwvvR1rvRiTa1bFwCHpz wGiA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:cc:to:from :subject:organization:dkim-signature; bh=a3YmpGDKcnBEnpAHkJlNK2byBHIgeHPDnNurcx2Ouqo=; b=op7Ic1tbTB7ZmAtk1nkNNyvtj8onWbEzSH/Hx5HXE2HzD+ZeRashVE9vQnWWv1H23p yB5pCzSeaxeN2VP6mWqIZVDp5XCeyGS8S3OkqXHMYdhZtRM7PpgL2hbtl7lP4ouO3AKr 9H0tYlGBfmOZHzJos0kmHXQ2vGKc7rNJW8x37AJAksICqKapA7FyCxI5GJunGpK4XI3U 5vVgIl+oOOAAQOhOZtGb8UYcHm355sXALvQF9YdZUfFU/3iIpMIkchvosU/976tARRks kr7jQSEq/Trtl5qysMFkGXEr0Xu6He5sudtOD+xXKCdOjbQZ3Q0Nn8iX8PAdOeqNb1ko EgTA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=aUovKjKc; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: from out1.vger.email (out1.vger.email. 
Organization: Red Hat UK Ltd.
Subject: [PATCH net-next 16/35] rxrpc: Don't hold a ref for connection workqueue
From: David Howells
To: netdev@vger.kernel.org
Cc: Marc Dionne, linux-afs@lists.infradead.org, dhowells@redhat.com, linux-kernel@vger.kernel.org
Date: Wed, 30 Nov 2022 16:56:36 +0000
Message-ID: <166982739680.621383.8533102684905128534.stgit@warthog.procyon.org.uk>
In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk>
References: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk>
User-Agent: StGit/1.5
MIME-Version: 1.0

Currently, rxrpc gives the connection's work item a ref on the connection when it queues it - and this is called from the timer expiration function. The problem comes when queue_work() fails (ie. the work item is already queued): the timer routine must put the ref - but this may cause the cleanup code to run. This has the unfortunate effect that the cleanup code may then be run in softirq context - which means that any spinlocks it might need to touch have to be guarded to disable softirqs (ie. they need a "_bh" suffix).

Fix this by the following means:

 (1) Don't give a ref to the work item.

 (2) Simplify the handling of service connections by adding a separate active count so that the refcount isn't also used for this.

 (3) Connection destruction for both client and service connections can then be cleaned up by putting rxrpc_put_connection() out of line and making a tidy progression through the destruction code (offloaded to a workqueue if the put is done from softirq or processor function context). The RCU part of the cleanup then only deals with the freeing at the end.

 (4) Make rxrpc_queue_conn() return immediately if it sees the active count is -1 rather than queuing the connection.

 (5) Make sure that the cleanup routine waits for the work item to complete.

 (6) Stash the rxrpc_net pointer in the conn struct so that the RCU free routine can use it, even if the local endpoint has been freed.

Unfortunately, neither the timer nor the work item can simply get around the problem by using refcount_inc_not_zero(), as the waits would still have to be done and there would still be the possibility of having to put the ref in the expiration function.

Note that the connection work item is mostly going to go away with the main event work being transferred to the I/O thread, so the wait in (5) will become obsolete.
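As a rough sketch of points (1), (2) and (4) (my_conn, my_conn_put and the other names below are made up for illustration, not the real rxrpc functions), queueing is gated on the active count rather than on a ref, and the final put defers the tidy-up to process context when it cannot safely flush things itself:

#include <linux/atomic.h>
#include <linux/preempt.h>
#include <linux/refcount.h>
#include <linux/timer.h>
#include <linux/workqueue.h>

struct my_conn {
	refcount_t		ref;		/* Object lifetime */
	atomic_t		active;		/* Usability; -1 when dying */
	struct timer_list	timer;
	struct work_struct	processor;
	struct work_struct	destructor;
};

/* Queue the event processor without taking a ref; a dying connection
 * (active == -1) is simply not queued.
 */
static void my_conn_queue(struct my_conn *conn)
{
	if (atomic_read(&conn->active) >= 0)
		queue_work(system_wq, &conn->processor);
}

/* Tidy destruction in process context: wait for the timer and the work
 * item, release resources, then leave the final freeing to RCU.
 */
static void my_conn_clean_up(struct work_struct *work)
{
	struct my_conn *conn = container_of(work, struct my_conn, destructor);

	del_timer_sync(&conn->timer);
	cancel_work_sync(&conn->processor);	/* may re-arm the timer */
	del_timer_sync(&conn->timer);
	/* ... unpublish, drop resources, then hand the freeing to call_rcu() ... */
}

/* Drop a ref; the last put offloads the clean-up to a workqueue if it
 * cannot flush the timer/work item from the current context.
 */
static void my_conn_put(struct my_conn *conn)
{
	if (!conn || !refcount_dec_and_test(&conn->ref))
		return;

	atomic_set(&conn->active, -1);	/* Stop further queueing */
	del_timer(&conn->timer);
	cancel_work(&conn->processor);

	if (in_softirq() || work_busy(&conn->processor) ||
	    timer_pending(&conn->timer))
		schedule_work(&conn->destructor);
	else
		my_conn_clean_up(&conn->destructor);
}

The separate active count lets the service-connection reaper tell "unused" apart from "dead" without overloading the refcount for both jobs.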
Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- include/trace/events/rxrpc.h | 11 +-- net/rxrpc/ar-internal.h | 25 ++---- net/rxrpc/call_accept.c | 1 net/rxrpc/conn_client.c | 31 ++------ net/rxrpc/conn_event.c | 4 - net/rxrpc/conn_object.c | 169 +++++++++++++++++++++++------------------- net/rxrpc/conn_service.c | 4 + net/rxrpc/net_ns.c | 2 net/rxrpc/proc.c | 5 + 9 files changed, 123 insertions(+), 129 deletions(-) diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index 4538de0079a5..44a9be9836f9 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -112,7 +112,6 @@ EM(rxrpc_conn_get_service_conn, "GET svc-conn") \ EM(rxrpc_conn_new_client, "NEW client ") \ EM(rxrpc_conn_new_service, "NEW service ") \ - EM(rxrpc_conn_put_already_queued, "PUT alreadyq") \ EM(rxrpc_conn_put_call, "PUT call ") \ EM(rxrpc_conn_put_call_input, "PUT inp-call") \ EM(rxrpc_conn_put_conn_input, "PUT inp-conn") \ @@ -121,13 +120,13 @@ EM(rxrpc_conn_put_local_dead, "PUT loc-dead") \ EM(rxrpc_conn_put_noreuse, "PUT noreuse ") \ EM(rxrpc_conn_put_poke, "PUT poke ") \ + EM(rxrpc_conn_put_service_reaped, "PUT svc-reap") \ EM(rxrpc_conn_put_unbundle, "PUT unbundle") \ EM(rxrpc_conn_put_unidle, "PUT unidle ") \ - EM(rxrpc_conn_put_work, "PUT work ") \ - EM(rxrpc_conn_queue_challenge, "GQ chall ") \ - EM(rxrpc_conn_queue_retry_work, "GQ retry-wk") \ - EM(rxrpc_conn_queue_rx_work, "GQ rx-work ") \ - EM(rxrpc_conn_queue_timer, "GQ timer ") \ + EM(rxrpc_conn_queue_challenge, "QUE chall ") \ + EM(rxrpc_conn_queue_retry_work, "QUE retry-wk") \ + EM(rxrpc_conn_queue_rx_work, "QUE rx-work ") \ + EM(rxrpc_conn_queue_timer, "QUE timer ") \ EM(rxrpc_conn_see_new_service_conn, "SEE new-svc ") \ EM(rxrpc_conn_see_reap_service, "SEE reap-svc") \ E_(rxrpc_conn_see_work, "SEE work ") diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 03523a864c11..41a57c145f2b 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -76,7 +76,7 @@ struct rxrpc_net { bool kill_all_client_conns; atomic_t nr_client_conns; spinlock_t client_conn_cache_lock; /* Lock for ->*_client_conns */ - spinlock_t client_conn_discard_lock; /* Prevent multiple discarders */ + struct mutex client_conn_discard_lock; /* Prevent multiple discarders */ struct list_head idle_client_conns; struct work_struct client_conn_reaper; struct timer_list client_conn_reap_timer; @@ -432,9 +432,11 @@ struct rxrpc_connection { struct rxrpc_conn_proto proto; struct rxrpc_local *local; /* Representation of local endpoint */ struct rxrpc_peer *peer; /* Remote endpoint */ + struct rxrpc_net *rxnet; /* Network namespace to which call belongs */ struct key *key; /* Security details */ refcount_t ref; + atomic_t active; /* Active count for service conns */ struct rcu_head rcu; struct list_head cache_link; @@ -455,6 +457,7 @@ struct rxrpc_connection { struct timer_list timer; /* Conn event timer */ struct work_struct processor; /* connection event processor */ + struct work_struct destructor; /* In-process-context destroyer */ struct rxrpc_bundle *bundle; /* Client connection bundle */ struct rb_node service_node; /* Node in peer->service_conns */ struct list_head proc_link; /* link in procfs list */ @@ -897,20 +900,20 @@ void rxrpc_process_delayed_final_acks(struct rxrpc_connection *, bool); extern unsigned int rxrpc_connection_expiry; extern unsigned int rxrpc_closed_conn_expiry; -struct rxrpc_connection *rxrpc_alloc_connection(gfp_t); +struct rxrpc_connection 
*rxrpc_alloc_connection(struct rxrpc_net *, gfp_t); struct rxrpc_connection *rxrpc_find_connection_rcu(struct rxrpc_local *, struct sk_buff *, struct rxrpc_peer **); void __rxrpc_disconnect_call(struct rxrpc_connection *, struct rxrpc_call *); void rxrpc_disconnect_call(struct rxrpc_call *); -void rxrpc_kill_connection(struct rxrpc_connection *); -bool rxrpc_queue_conn(struct rxrpc_connection *, enum rxrpc_conn_trace); +void rxrpc_kill_client_conn(struct rxrpc_connection *); +void rxrpc_queue_conn(struct rxrpc_connection *, enum rxrpc_conn_trace); void rxrpc_see_connection(struct rxrpc_connection *, enum rxrpc_conn_trace); struct rxrpc_connection *rxrpc_get_connection(struct rxrpc_connection *, enum rxrpc_conn_trace); struct rxrpc_connection *rxrpc_get_connection_maybe(struct rxrpc_connection *, enum rxrpc_conn_trace); -void rxrpc_put_service_conn(struct rxrpc_connection *, enum rxrpc_conn_trace); +void rxrpc_put_connection(struct rxrpc_connection *, enum rxrpc_conn_trace); void rxrpc_service_connection_reaper(struct work_struct *); void rxrpc_destroy_all_connections(struct rxrpc_net *); @@ -924,18 +927,6 @@ static inline bool rxrpc_conn_is_service(const struct rxrpc_connection *conn) return !rxrpc_conn_is_client(conn); } -static inline void rxrpc_put_connection(struct rxrpc_connection *conn, - enum rxrpc_conn_trace why) -{ - if (!conn) - return; - - if (rxrpc_conn_is_client(conn)) - rxrpc_put_client_conn(conn, why); - else - rxrpc_put_service_conn(conn, why); -} - static inline void rxrpc_reduce_conn_timer(struct rxrpc_connection *conn, unsigned long expire_at) { diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c index dd4ca4bee77f..8d106b626aa3 100644 --- a/net/rxrpc/call_accept.c +++ b/net/rxrpc/call_accept.c @@ -308,6 +308,7 @@ static struct rxrpc_call *rxrpc_alloc_incoming_call(struct rxrpc_sock *rx, rxrpc_new_incoming_connection(rx, conn, sec, skb); } else { rxrpc_get_connection(conn, rxrpc_conn_get_service_conn); + atomic_inc(&conn->active); } /* And now we can allocate and set up a new call */ diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c index 34ff6fa85c32..9485a3d18f29 100644 --- a/net/rxrpc/conn_client.c +++ b/net/rxrpc/conn_client.c @@ -51,7 +51,7 @@ static void rxrpc_deactivate_bundle(struct rxrpc_bundle *bundle); static int rxrpc_get_client_connection_id(struct rxrpc_connection *conn, gfp_t gfp) { - struct rxrpc_net *rxnet = conn->local->rxnet; + struct rxrpc_net *rxnet = conn->rxnet; int id; _enter(""); @@ -179,7 +179,7 @@ rxrpc_alloc_client_connection(struct rxrpc_bundle *bundle, gfp_t gfp) _enter(""); - conn = rxrpc_alloc_connection(gfp); + conn = rxrpc_alloc_connection(rxnet, gfp); if (!conn) { _leave(" = -ENOMEM"); return ERR_PTR(-ENOMEM); @@ -243,7 +243,7 @@ static bool rxrpc_may_reuse_conn(struct rxrpc_connection *conn) if (!conn) goto dont_reuse; - rxnet = conn->local->rxnet; + rxnet = conn->rxnet; if (test_bit(RXRPC_CONN_DONT_REUSE, &conn->flags)) goto dont_reuse; @@ -970,7 +970,7 @@ static void rxrpc_deactivate_bundle(struct rxrpc_bundle *bundle) /* * Clean up a dead client connection. */ -static void rxrpc_kill_client_conn(struct rxrpc_connection *conn) +void rxrpc_kill_client_conn(struct rxrpc_connection *conn) { struct rxrpc_local *local = conn->local; struct rxrpc_net *rxnet = local->rxnet; @@ -981,23 +981,6 @@ static void rxrpc_kill_client_conn(struct rxrpc_connection *conn) atomic_dec(&rxnet->nr_client_conns); rxrpc_put_client_connection_id(conn); - rxrpc_kill_connection(conn); -} - -/* - * Clean up a dead client connections. 
- */ -void rxrpc_put_client_conn(struct rxrpc_connection *conn, - enum rxrpc_conn_trace why) -{ - unsigned int debug_id = conn->debug_id; - bool dead; - int r; - - dead = __refcount_dec_and_test(&conn->ref, &r); - trace_rxrpc_conn(debug_id, r - 1, why); - if (dead) - rxrpc_kill_client_conn(conn); } /* @@ -1023,7 +1006,7 @@ void rxrpc_discard_expired_client_conns(struct work_struct *work) } /* Don't double up on the discarding */ - if (!spin_trylock(&rxnet->client_conn_discard_lock)) { + if (!mutex_trylock(&rxnet->client_conn_discard_lock)) { _leave(" [already]"); return; } @@ -1061,6 +1044,7 @@ void rxrpc_discard_expired_client_conns(struct work_struct *work) goto not_yet_expired; } + atomic_dec(&conn->active); trace_rxrpc_client(conn, -1, rxrpc_client_discard); list_del_init(&conn->cache_link); @@ -1087,7 +1071,7 @@ void rxrpc_discard_expired_client_conns(struct work_struct *work) out: spin_unlock(&rxnet->client_conn_cache_lock); - spin_unlock(&rxnet->client_conn_discard_lock); + mutex_unlock(&rxnet->client_conn_discard_lock); _leave(""); } @@ -1127,6 +1111,7 @@ void rxrpc_clean_up_local_conns(struct rxrpc_local *local) list_for_each_entry_safe(conn, tmp, &rxnet->idle_client_conns, cache_link) { if (conn->local == local) { + atomic_dec(&conn->active); trace_rxrpc_client(conn, -1, rxrpc_client_discard); list_move(&conn->cache_link, &graveyard); } diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c index 49d885f73fa5..23a74e35052d 100644 --- a/net/rxrpc/conn_event.c +++ b/net/rxrpc/conn_event.c @@ -478,8 +478,4 @@ void rxrpc_process_connection(struct work_struct *work) rxrpc_do_process_connection(conn); rxrpc_unuse_local(conn->local, rxrpc_local_unuse_conn_work); } - - rxrpc_put_connection(conn, rxrpc_conn_put_work); - _leave(""); - return; } diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c index f7c271a740ed..c2e05ea29f12 100644 --- a/net/rxrpc/conn_object.c +++ b/net/rxrpc/conn_object.c @@ -19,7 +19,9 @@ unsigned int __read_mostly rxrpc_connection_expiry = 10 * 60; unsigned int __read_mostly rxrpc_closed_conn_expiry = 10; -static void rxrpc_destroy_connection(struct rcu_head *); +static void rxrpc_clean_up_connection(struct work_struct *work); +static void rxrpc_set_service_reap_timer(struct rxrpc_net *rxnet, + unsigned long reap_at); static void rxrpc_connection_timer(struct timer_list *timer) { @@ -32,7 +34,8 @@ static void rxrpc_connection_timer(struct timer_list *timer) /* * allocate a new connection */ -struct rxrpc_connection *rxrpc_alloc_connection(gfp_t gfp) +struct rxrpc_connection *rxrpc_alloc_connection(struct rxrpc_net *rxnet, + gfp_t gfp) { struct rxrpc_connection *conn; @@ -42,10 +45,12 @@ struct rxrpc_connection *rxrpc_alloc_connection(gfp_t gfp) if (conn) { INIT_LIST_HEAD(&conn->cache_link); timer_setup(&conn->timer, &rxrpc_connection_timer, 0); - INIT_WORK(&conn->processor, &rxrpc_process_connection); + INIT_WORK(&conn->processor, rxrpc_process_connection); + INIT_WORK(&conn->destructor, rxrpc_clean_up_connection); INIT_LIST_HEAD(&conn->proc_link); INIT_LIST_HEAD(&conn->link); skb_queue_head_init(&conn->rx_queue); + conn->rxnet = rxnet; conn->security = &rxrpc_no_security; spin_lock_init(&conn->state_lock); conn->debug_id = atomic_inc_return(&rxrpc_debug_id); @@ -224,53 +229,20 @@ void rxrpc_disconnect_call(struct rxrpc_call *call) set_bit(RXRPC_CALL_DISCONNECTED, &call->flags); conn->idle_timestamp = jiffies; -} - -/* - * Kill off a connection. 
- */ -void rxrpc_kill_connection(struct rxrpc_connection *conn) -{ - struct rxrpc_net *rxnet = conn->local->rxnet; - - ASSERT(!rcu_access_pointer(conn->channels[0].call) && - !rcu_access_pointer(conn->channels[1].call) && - !rcu_access_pointer(conn->channels[2].call) && - !rcu_access_pointer(conn->channels[3].call)); - ASSERT(list_empty(&conn->cache_link)); - - write_lock(&rxnet->conn_lock); - list_del_init(&conn->proc_link); - write_unlock(&rxnet->conn_lock); - - /* Drain the Rx queue. Note that even though we've unpublished, an - * incoming packet could still be being added to our Rx queue, so we - * will need to drain it again in the RCU cleanup handler. - */ - rxrpc_purge_queue(&conn->rx_queue); - - /* Leave final destruction to RCU. The connection processor work item - * must carry a ref on the connection to prevent us getting here whilst - * it is queued or running. - */ - call_rcu(&conn->rcu, rxrpc_destroy_connection); + if (atomic_dec_and_test(&conn->active)) + rxrpc_set_service_reap_timer(conn->rxnet, + jiffies + rxrpc_connection_expiry); } /* * Queue a connection's work processor, getting a ref to pass to the work * queue. */ -bool rxrpc_queue_conn(struct rxrpc_connection *conn, enum rxrpc_conn_trace why) +void rxrpc_queue_conn(struct rxrpc_connection *conn, enum rxrpc_conn_trace why) { - int r; - - if (!__refcount_inc_not_zero(&conn->ref, &r)) - return false; - if (rxrpc_queue_work(&conn->processor)) - trace_rxrpc_conn(conn->debug_id, why, r + 1); - else - rxrpc_put_connection(conn, rxrpc_conn_put_already_queued); - return true; + if (atomic_read(&conn->active) >= 0 && + rxrpc_queue_work(&conn->processor)) + rxrpc_see_connection(conn, why); } /* @@ -327,51 +299,96 @@ static void rxrpc_set_service_reap_timer(struct rxrpc_net *rxnet, timer_reduce(&rxnet->service_conn_reap_timer, reap_at); } -/* - * Release a service connection - */ -void rxrpc_put_service_conn(struct rxrpc_connection *conn, - enum rxrpc_conn_trace why) -{ - unsigned int debug_id = conn->debug_id; - int r; - - __refcount_dec(&conn->ref, &r); - trace_rxrpc_conn(debug_id, r - 1, why); - if (r - 1 == 1) - rxrpc_set_service_reap_timer(conn->local->rxnet, - jiffies + rxrpc_connection_expiry); -} - /* * destroy a virtual connection */ -static void rxrpc_destroy_connection(struct rcu_head *rcu) +static void rxrpc_rcu_free_connection(struct rcu_head *rcu) { struct rxrpc_connection *conn = container_of(rcu, struct rxrpc_connection, rcu); + struct rxrpc_net *rxnet = conn->rxnet; _enter("{%d,u=%d}", conn->debug_id, refcount_read(&conn->ref)); trace_rxrpc_conn(conn->debug_id, refcount_read(&conn->ref), rxrpc_conn_free); + kfree(conn); - ASSERTCMP(refcount_read(&conn->ref), ==, 0); + if (atomic_dec_and_test(&rxnet->nr_conns)) + wake_up_var(&rxnet->nr_conns); +} + +/* + * Clean up a dead connection. 
+ */ +static void rxrpc_clean_up_connection(struct work_struct *work) +{ + struct rxrpc_connection *conn = + container_of(work, struct rxrpc_connection, destructor); + struct rxrpc_net *rxnet = conn->rxnet; + + ASSERT(!rcu_access_pointer(conn->channels[0].call) && + !rcu_access_pointer(conn->channels[1].call) && + !rcu_access_pointer(conn->channels[2].call) && + !rcu_access_pointer(conn->channels[3].call)); + ASSERT(list_empty(&conn->cache_link)); del_timer_sync(&conn->timer); + cancel_work_sync(&conn->processor); /* Processing may restart the timer */ + del_timer_sync(&conn->timer); + + write_lock(&rxnet->conn_lock); + list_del_init(&conn->proc_link); + write_unlock(&rxnet->conn_lock); + rxrpc_purge_queue(&conn->rx_queue); + rxrpc_kill_client_conn(conn); + conn->security->clear(conn); key_put(conn->key); rxrpc_put_bundle(conn->bundle, rxrpc_bundle_put_conn); rxrpc_put_peer(conn->peer, rxrpc_peer_put_conn); - - if (atomic_dec_and_test(&conn->local->rxnet->nr_conns)) - wake_up_var(&conn->local->rxnet->nr_conns); rxrpc_put_local(conn->local, rxrpc_local_put_kill_conn); - kfree(conn); - _leave(""); + /* Drain the Rx queue. Note that even though we've unpublished, an + * incoming packet could still be being added to our Rx queue, so we + * will need to drain it again in the RCU cleanup handler. + */ + rxrpc_purge_queue(&conn->rx_queue); + + call_rcu(&conn->rcu, rxrpc_rcu_free_connection); +} + +/* + * Drop a ref on a connection. + */ +void rxrpc_put_connection(struct rxrpc_connection *conn, + enum rxrpc_conn_trace why) +{ + unsigned int debug_id; + bool dead; + int r; + + if (!conn) + return; + + debug_id = conn->debug_id; + dead = __refcount_dec_and_test(&conn->ref, &r); + trace_rxrpc_conn(debug_id, r - 1, why); + if (dead) { + del_timer(&conn->timer); + cancel_work(&conn->processor); + + if (in_softirq() || work_busy(&conn->processor) || + timer_pending(&conn->timer)) + /* Can't use the rxrpc workqueue as we need to cancel/flush + * something that may be running/waiting there. + */ + schedule_work(&conn->destructor); + else + rxrpc_clean_up_connection(&conn->destructor); + } } /* @@ -383,6 +400,7 @@ void rxrpc_service_connection_reaper(struct work_struct *work) struct rxrpc_net *rxnet = container_of(work, struct rxrpc_net, service_conn_reaper); unsigned long expire_at, earliest, idle_timestamp, now; + int active; LIST_HEAD(graveyard); @@ -393,8 +411,8 @@ void rxrpc_service_connection_reaper(struct work_struct *work) write_lock(&rxnet->conn_lock); list_for_each_entry_safe(conn, _p, &rxnet->service_conns, link) { - ASSERTCMP(refcount_read(&conn->ref), >, 0); - if (likely(refcount_read(&conn->ref) > 1)) + ASSERTCMP(atomic_read(&conn->active), >=, 0); + if (likely(atomic_read(&conn->active) > 0)) continue; if (conn->state == RXRPC_CONN_SERVICE_PREALLOC) continue; @@ -405,8 +423,8 @@ void rxrpc_service_connection_reaper(struct work_struct *work) if (conn->local->service_closed) expire_at = idle_timestamp + rxrpc_closed_conn_expiry * HZ; - _debug("reap CONN %d { u=%d,t=%ld }", - conn->debug_id, refcount_read(&conn->ref), + _debug("reap CONN %d { a=%d,t=%ld }", + conn->debug_id, atomic_read(&conn->active), (long)expire_at - (long)now); if (time_before(now, expire_at)) { @@ -416,10 +434,11 @@ void rxrpc_service_connection_reaper(struct work_struct *work) } } - /* The usage count sits at 1 whilst the object is unused on the - * list; we reduce that to 0 to make the object unavailable. 
+ /* The activity count sits at 0 whilst the conn is unused on + * the list; we reduce that to -1 to make the conn unavailable. */ - if (!refcount_dec_if_one(&conn->ref)) + active = 0; + if (!atomic_try_cmpxchg(&conn->active, &active, -1)) continue; rxrpc_see_connection(conn, rxrpc_conn_see_reap_service); @@ -443,8 +462,8 @@ void rxrpc_service_connection_reaper(struct work_struct *work) link); list_del_init(&conn->link); - ASSERTCMP(refcount_read(&conn->ref), ==, 0); - rxrpc_kill_connection(conn); + ASSERTCMP(atomic_read(&conn->active), ==, -1); + rxrpc_put_connection(conn, rxrpc_conn_put_service_reaped); } _leave(""); diff --git a/net/rxrpc/conn_service.c b/net/rxrpc/conn_service.c index 2c44d67b43dc..b5ae7c753fc3 100644 --- a/net/rxrpc/conn_service.c +++ b/net/rxrpc/conn_service.c @@ -125,7 +125,7 @@ static void rxrpc_publish_service_conn(struct rxrpc_peer *peer, struct rxrpc_connection *rxrpc_prealloc_service_connection(struct rxrpc_net *rxnet, gfp_t gfp) { - struct rxrpc_connection *conn = rxrpc_alloc_connection(gfp); + struct rxrpc_connection *conn = rxrpc_alloc_connection(rxnet, gfp); if (conn) { /* We maintain an extra ref on the connection whilst it is on @@ -181,6 +181,8 @@ void rxrpc_new_incoming_connection(struct rxrpc_sock *rx, conn->service_id == rx->service_upgrade.from) conn->service_id = rx->service_upgrade.to; + atomic_set(&conn->active, 1); + /* Make the connection a target for incoming packets. */ rxrpc_publish_service_conn(conn->peer, conn); } diff --git a/net/rxrpc/net_ns.c b/net/rxrpc/net_ns.c index 84242c0e467c..5905530e2f33 100644 --- a/net/rxrpc/net_ns.c +++ b/net/rxrpc/net_ns.c @@ -65,7 +65,7 @@ static __net_init int rxrpc_init_net(struct net *net) atomic_set(&rxnet->nr_client_conns, 0); rxnet->kill_all_client_conns = false; spin_lock_init(&rxnet->client_conn_cache_lock); - spin_lock_init(&rxnet->client_conn_discard_lock); + mutex_init(&rxnet->client_conn_discard_lock); INIT_LIST_HEAD(&rxnet->idle_client_conns); INIT_WORK(&rxnet->client_conn_reaper, rxrpc_discard_expired_client_conns); diff --git a/net/rxrpc/proc.c b/net/rxrpc/proc.c index bb2edf6db896..d3a6d24cf871 100644 --- a/net/rxrpc/proc.c +++ b/net/rxrpc/proc.c @@ -159,7 +159,7 @@ static int rxrpc_connection_seq_show(struct seq_file *seq, void *v) seq_puts(seq, "Proto Local " " Remote " - " SvID ConnID End Use State Key " + " SvID ConnID End Ref Act State Key " " Serial ISerial CallId0 CallId1 CallId2 CallId3\n" ); return 0; @@ -177,7 +177,7 @@ static int rxrpc_connection_seq_show(struct seq_file *seq, void *v) sprintf(rbuff, "%pISpc", &conn->peer->srx.transport); print: seq_printf(seq, - "UDP %-47.47s %-47.47s %4x %08x %s %3u" + "UDP %-47.47s %-47.47s %4x %08x %s %3u %3d" " %s %08x %08x %08x %08x %08x %08x %08x\n", lbuff, rbuff, @@ -185,6 +185,7 @@ static int rxrpc_connection_seq_show(struct seq_file *seq, void *v) conn->proto.cid, rxrpc_conn_is_service(conn) ? 
"Svc" : "Clt", refcount_read(&conn->ref), + atomic_read(&conn->active), rxrpc_conn_states[conn->state], key_serial(conn->key), atomic_read(&conn->serial), From patchwork Wed Nov 30 16:56:45 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 27894 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp1040624wrr; Wed, 30 Nov 2022 09:02:53 -0800 (PST) X-Google-Smtp-Source: AA0mqf5hYIxNHVxeuGnrihScT0WFfkMI7lGL1bfncGRbFSryA3SpQA84VElOqsdP6CMiKRCZSXtx X-Received: by 2002:a17:906:c259:b0:7b5:9670:ae0 with SMTP id bl25-20020a170906c25900b007b596700ae0mr38262763ejb.321.1669827773371; Wed, 30 Nov 2022 09:02:53 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669827773; cv=none; d=google.com; s=arc-20160816; b=UXJ+wFeqa5BsqPRwQFTIKHRlDZtbwE9akBfGTpUYtMHFk2shtuof8UqjjG7Xv9qoo0 BH5b6EjuVVaN6r+sYeSLSErSbkv4DWlqjpXVEgtp6yx6BD77zn6unpqagBz997fVntt3 iEiZdYfamCzFdsyi4T8RbjbtM7hgId1qOM3xpEP+fYjCpkP5CJY76dkBhwZiyjR6oltv 0pbrd9z+caovp/22jpB9MFka9E1VMaQr/ZY34O9DUjvAXjDteDWv2dmAw6i/qtUHNZjY umXYREgGzuXLfUmoxrED5w0s6oXUP1A3uYeezc4oJSqVmuKiegKhk2CGSA+Jv9t5+qq2 ydtA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:cc:to:from :subject:organization:dkim-signature; bh=Da23v2ofVBsJvoOGg99bX5bNrH38dJvu85qEAwPJJD0=; b=z7EkA7dJYg836v2RUfcUu3/vr8ME5i1Iiryv84OIUvEPitHMnmr1GkTITuSYb8v4PB kccboBq5RJzab369+cNXsUdySTxS+hbX5I7z8oJbQ0ryu0E/NKe8wML8jcz6kWKRhmo8 N9kaq1sKQSKkV87VCkV8GZrPFPMMIWwN7wA08+JkH0Wp+0x5pvg8iMWBpowCyZp32eQ4 1XS3hfORQ9cSXqDHRf1AI9Gm0YZpUHoRPpaCYc2fEWcb0qKpfZySAU/cez7sfIjKuN6I oT+DRm31ZlLyiXaJSxbXv45yBQXbpE5PLe+8NPAr8CatAtMhSMPwzR9DnW8BvY6Yy37U 7mtQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=WxsSS7QO; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: from out1.vger.email (out1.vger.email. 
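A note on the connection lifecycle change in the preceding patch: the service-connection reaper no longer inspects the refcount but a new "active" count, where 0 means idle-but-usable and the reaper claims an idle connection by atomically moving 0 to -1 so that late users (for example rxrpc_queue_conn(), which now checks active >= 0) back off. The following is a minimal userspace C sketch of that claim pattern, not the kernel code itself; all names are invented, C11 atomics stand in for the kernel's atomic_t helpers, and the sketch folds the "still available?" check into activation for clarity.

	/* Minimal userspace sketch of the "active count" claim pattern used by
	 * the service-connection reaper above.  Illustrative only.
	 */
	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	struct fake_conn {
		atomic_int active;	/* 0 = idle, >0 = in use, -1 = claimed for reap */
	};

	/* A user takes the object only if it has not been claimed by the reaper. */
	static bool fake_conn_activate(struct fake_conn *conn)
	{
		int old = atomic_load(&conn->active);

		do {
			if (old < 0)
				return false;	/* already being torn down */
		} while (!atomic_compare_exchange_weak(&conn->active, &old, old + 1));
		return true;
	}

	static void fake_conn_deactivate(struct fake_conn *conn)
	{
		atomic_fetch_sub(&conn->active, 1);
	}

	/* The reaper claims an idle object by swinging 0 -> -1 in one step. */
	static bool fake_conn_try_reap(struct fake_conn *conn)
	{
		int idle = 0;

		return atomic_compare_exchange_strong(&conn->active, &idle, -1);
	}

	int main(void)
	{
		struct fake_conn conn = { .active = 0 };

		if (fake_conn_activate(&conn)) {
			printf("in use, reap attempt: %d\n", fake_conn_try_reap(&conn));	/* 0 */
			fake_conn_deactivate(&conn);
		}
		printf("idle, reap attempt: %d\n", fake_conn_try_reap(&conn));		/* 1 */
		printf("late activate after claim: %d\n", fake_conn_activate(&conn));	/* 0 */
		return 0;
	}

The same patch also has the final put defer the real cleanup to the system workqueue (schedule_work() on the new destructor work item) when it happens in softirq context or while the connection's processor work item or timer is still pending, because the cleanup has to cancel and flush those synchronously; otherwise it cleans up directly.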
3798903 Subject: [PATCH net-next 17/35] rxrpc: Split the receive code From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Wed, 30 Nov 2022 16:56:45 +0000 Message-ID: <166982740561.621383.12448358095690926130.stgit@warthog.procyon.org.uk> In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> References: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.9 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750941326827237619?= X-GMAIL-MSGID: =?utf-8?q?1750941326827237619?= Split the code that handles packet reception in softirq mode as a prelude to moving all the packet processing beyond routing to the appropriate call and setting up of a new call out into process context. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- net/rxrpc/Makefile | 1 net/rxrpc/ar-internal.h | 7 + net/rxrpc/input.c | 372 +---------------------------------------------- net/rxrpc/io_thread.c | 370 +++++++++++++++++++++++++++++++++++++++++++++++ 4 files changed, 384 insertions(+), 366 deletions(-) create mode 100644 net/rxrpc/io_thread.c diff --git a/net/rxrpc/Makefile b/net/rxrpc/Makefile index 79687477d93c..e76d3459d78e 100644 --- a/net/rxrpc/Makefile +++ b/net/rxrpc/Makefile @@ -16,6 +16,7 @@ rxrpc-y := \ conn_service.o \ input.o \ insecure.o \ + io_thread.o \ key.o \ local_event.o \ local_object.o \ diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 41a57c145f2b..523cc9c5ab12 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -946,6 +946,13 @@ void rxrpc_unpublish_service_conn(struct rxrpc_connection *); /* * input.c */ +void rxrpc_input_call_packet(struct rxrpc_call *, struct sk_buff *); +void rxrpc_input_implicit_end_call(struct rxrpc_sock *, struct rxrpc_connection *, + struct rxrpc_call *); + +/* + * io_thread.c + */ int rxrpc_input_packet(struct sock *, struct sk_buff *); /* diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index ab8b7a1be935..f4f6f3c62d03 100644 --- a/net/rxrpc/input.c +++ b/net/rxrpc/input.c @@ -1,7 +1,7 @@ // SPDX-License-Identifier: GPL-2.0-or-later -/* RxRPC packet reception +/* Processing of received RxRPC packets * - * Copyright (C) 2007, 2016 Red Hat, Inc. All Rights Reserved. + * Copyright (C) 2020 Red Hat, Inc. All Rights Reserved. * Written by David Howells (dhowells@redhat.com) */ @@ -1029,7 +1029,7 @@ static void rxrpc_input_abort(struct rxrpc_call *call, struct sk_buff *skb) /* * Process an incoming call packet. */ -static void rxrpc_input_call_packet(struct rxrpc_call *call, +void rxrpc_input_call_packet(struct rxrpc_call *call, struct sk_buff *skb) { struct rxrpc_skb_priv *sp = rxrpc_skb(skb); @@ -1086,9 +1086,9 @@ static void rxrpc_input_call_packet(struct rxrpc_call *call, * * TODO: If callNumber > call_id + 1, renegotiate security. 
*/ -static void rxrpc_input_implicit_end_call(struct rxrpc_sock *rx, - struct rxrpc_connection *conn, - struct rxrpc_call *call) +void rxrpc_input_implicit_end_call(struct rxrpc_sock *rx, + struct rxrpc_connection *conn, + struct rxrpc_call *call) { switch (READ_ONCE(call->state)) { case RXRPC_CALL_SERVER_AWAIT_ACK: @@ -1109,363 +1109,3 @@ static void rxrpc_input_implicit_end_call(struct rxrpc_sock *rx, __rxrpc_disconnect_call(conn, call); spin_unlock(&rx->incoming_lock); } - -/* - * post connection-level events to the connection - * - this includes challenges, responses, some aborts and call terminal packet - * retransmission. - */ -static void rxrpc_post_packet_to_conn(struct rxrpc_connection *conn, - struct sk_buff *skb) -{ - _enter("%p,%p", conn, skb); - - skb_queue_tail(&conn->rx_queue, skb); - rxrpc_queue_conn(conn, rxrpc_conn_queue_rx_work); -} - -/* - * post endpoint-level events to the local endpoint - * - this includes debug and version messages - */ -static void rxrpc_post_packet_to_local(struct rxrpc_local *local, - struct sk_buff *skb) -{ - _enter("%p,%p", local, skb); - - if (rxrpc_get_local_maybe(local, rxrpc_local_get_queue)) { - skb_queue_tail(&local->event_queue, skb); - rxrpc_queue_local(local); - } else { - rxrpc_free_skb(skb, rxrpc_skb_put_input); - } -} - -/* - * put a packet up for transport-level abort - */ -static void rxrpc_reject_packet(struct rxrpc_local *local, struct sk_buff *skb) -{ - if (rxrpc_get_local_maybe(local, rxrpc_local_get_queue)) { - skb_queue_tail(&local->reject_queue, skb); - rxrpc_queue_local(local); - } else { - rxrpc_free_skb(skb, rxrpc_skb_put_input); - } -} - -/* - * Extract the wire header from a packet and translate the byte order. - */ -static noinline -int rxrpc_extract_header(struct rxrpc_skb_priv *sp, struct sk_buff *skb) -{ - struct rxrpc_wire_header whdr; - - /* dig out the RxRPC connection details */ - if (skb_copy_bits(skb, 0, &whdr, sizeof(whdr)) < 0) { - trace_rxrpc_rx_eproto(NULL, sp->hdr.serial, - tracepoint_string("bad_hdr")); - return -EBADMSG; - } - - memset(sp, 0, sizeof(*sp)); - sp->hdr.epoch = ntohl(whdr.epoch); - sp->hdr.cid = ntohl(whdr.cid); - sp->hdr.callNumber = ntohl(whdr.callNumber); - sp->hdr.seq = ntohl(whdr.seq); - sp->hdr.serial = ntohl(whdr.serial); - sp->hdr.flags = whdr.flags; - sp->hdr.type = whdr.type; - sp->hdr.userStatus = whdr.userStatus; - sp->hdr.securityIndex = whdr.securityIndex; - sp->hdr._rsvd = ntohs(whdr._rsvd); - sp->hdr.serviceId = ntohs(whdr.serviceId); - return 0; -} - -/* - * Extract the abort code from an ABORT packet and stash it in skb->priority. - */ -static bool rxrpc_extract_abort(struct sk_buff *skb) -{ - __be32 wtmp; - - if (skb_copy_bits(skb, sizeof(struct rxrpc_wire_header), - &wtmp, sizeof(wtmp)) < 0) - return false; - skb->priority = ntohl(wtmp); - return true; -} - -/* - * handle data received on the local endpoint - * - may be called in interrupt context - * - * [!] Note that as this is called from the encap_rcv hook, the socket is not - * held locked by the caller and nothing prevents sk_user_data on the UDP from - * being cleared in the middle of processing this function. - * - * Called with the RCU read lock held from the IP layer via UDP. 
- */ -int rxrpc_input_packet(struct sock *udp_sk, struct sk_buff *skb) -{ - struct rxrpc_local *local = rcu_dereference_sk_user_data(udp_sk); - struct rxrpc_connection *conn; - struct rxrpc_channel *chan; - struct rxrpc_call *call = NULL; - struct rxrpc_skb_priv *sp; - struct rxrpc_peer *peer = NULL; - struct rxrpc_sock *rx = NULL; - unsigned int channel; - - _enter("%p", udp_sk); - - if (unlikely(!local)) { - kfree_skb(skb); - return 0; - } - if (skb->tstamp == 0) - skb->tstamp = ktime_get_real(); - - rxrpc_new_skb(skb, rxrpc_skb_new_encap_rcv); - - skb_pull(skb, sizeof(struct udphdr)); - - /* The UDP protocol already released all skb resources; - * we are free to add our own data there. - */ - sp = rxrpc_skb(skb); - - /* dig out the RxRPC connection details */ - if (rxrpc_extract_header(sp, skb) < 0) - goto bad_message; - - if (IS_ENABLED(CONFIG_AF_RXRPC_INJECT_LOSS)) { - static int lose; - if ((lose++ & 7) == 7) { - trace_rxrpc_rx_lose(sp); - rxrpc_free_skb(skb, rxrpc_skb_put_lose); - return 0; - } - } - - if (skb->tstamp == 0) - skb->tstamp = ktime_get_real(); - trace_rxrpc_rx_packet(sp); - - switch (sp->hdr.type) { - case RXRPC_PACKET_TYPE_VERSION: - if (rxrpc_to_client(sp)) - goto discard; - rxrpc_post_packet_to_local(local, skb); - goto out; - - case RXRPC_PACKET_TYPE_BUSY: - if (rxrpc_to_server(sp)) - goto discard; - fallthrough; - case RXRPC_PACKET_TYPE_ACK: - case RXRPC_PACKET_TYPE_ACKALL: - if (sp->hdr.callNumber == 0) - goto bad_message; - break; - case RXRPC_PACKET_TYPE_ABORT: - if (!rxrpc_extract_abort(skb)) - return true; /* Just discard if malformed */ - break; - - case RXRPC_PACKET_TYPE_DATA: - if (sp->hdr.callNumber == 0 || - sp->hdr.seq == 0) - goto bad_message; - - /* Unshare the packet so that it can be modified for in-place - * decryption. - */ - if (sp->hdr.securityIndex != 0) { - struct sk_buff *nskb = skb_unshare(skb, GFP_ATOMIC); - if (!nskb) { - rxrpc_eaten_skb(skb, rxrpc_skb_eaten_by_unshare_nomem); - goto out; - } - - if (nskb != skb) { - rxrpc_eaten_skb(skb, rxrpc_skb_eaten_by_unshare); - skb = nskb; - rxrpc_new_skb(skb, rxrpc_skb_new_unshared); - sp = rxrpc_skb(skb); - } - } - break; - - case RXRPC_PACKET_TYPE_CHALLENGE: - if (rxrpc_to_server(sp)) - goto discard; - break; - case RXRPC_PACKET_TYPE_RESPONSE: - if (rxrpc_to_client(sp)) - goto discard; - break; - - /* Packet types 9-11 should just be ignored. */ - case RXRPC_PACKET_TYPE_PARAMS: - case RXRPC_PACKET_TYPE_10: - case RXRPC_PACKET_TYPE_11: - goto discard; - - default: - goto bad_message; - } - - if (sp->hdr.serviceId == 0) - goto bad_message; - - if (rxrpc_to_server(sp)) { - /* Weed out packets to services we're not offering. Packets - * that would begin a call are explicitly rejected and the rest - * are just discarded. 
- */ - rx = rcu_dereference(local->service); - if (!rx || (sp->hdr.serviceId != rx->srx.srx_service && - sp->hdr.serviceId != rx->second_service)) { - if (sp->hdr.type == RXRPC_PACKET_TYPE_DATA && - sp->hdr.seq == 1) - goto unsupported_service; - goto discard; - } - } - - conn = rxrpc_find_connection_rcu(local, skb, &peer); - if (conn) { - if (sp->hdr.securityIndex != conn->security_ix) - goto wrong_security; - - if (sp->hdr.serviceId != conn->service_id) { - int old_id; - - if (!test_bit(RXRPC_CONN_PROBING_FOR_UPGRADE, &conn->flags)) - goto reupgrade; - old_id = cmpxchg(&conn->service_id, conn->orig_service_id, - sp->hdr.serviceId); - - if (old_id != conn->orig_service_id && - old_id != sp->hdr.serviceId) - goto reupgrade; - } - - if (sp->hdr.callNumber == 0) { - /* Connection-level packet */ - _debug("CONN %p {%d}", conn, conn->debug_id); - rxrpc_post_packet_to_conn(conn, skb); - goto out; - } - - if ((int)sp->hdr.serial - (int)conn->hi_serial > 0) - conn->hi_serial = sp->hdr.serial; - - /* Call-bound packets are routed by connection channel. */ - channel = sp->hdr.cid & RXRPC_CHANNELMASK; - chan = &conn->channels[channel]; - - /* Ignore really old calls */ - if (sp->hdr.callNumber < chan->last_call) - goto discard; - - if (sp->hdr.callNumber == chan->last_call) { - if (chan->call || - sp->hdr.type == RXRPC_PACKET_TYPE_ABORT) - goto discard; - - /* For the previous service call, if completed - * successfully, we discard all further packets. - */ - if (rxrpc_conn_is_service(conn) && - chan->last_type == RXRPC_PACKET_TYPE_ACK) - goto discard; - - /* But otherwise we need to retransmit the final packet - * from data cached in the connection record. - */ - if (sp->hdr.type == RXRPC_PACKET_TYPE_DATA) - trace_rxrpc_rx_data(chan->call_debug_id, - sp->hdr.seq, - sp->hdr.serial, - sp->hdr.flags); - rxrpc_post_packet_to_conn(conn, skb); - goto out; - } - - call = rcu_dereference(chan->call); - - if (sp->hdr.callNumber > chan->call_id) { - if (rxrpc_to_client(sp)) - goto reject_packet; - if (call) - rxrpc_input_implicit_end_call(rx, conn, call); - call = NULL; - } - - if (call) { - if (sp->hdr.serviceId != call->service_id) - call->service_id = sp->hdr.serviceId; - if ((int)sp->hdr.serial - (int)call->rx_serial > 0) - call->rx_serial = sp->hdr.serial; - if (!test_bit(RXRPC_CALL_RX_HEARD, &call->flags)) - set_bit(RXRPC_CALL_RX_HEARD, &call->flags); - } - } - - if (!call || refcount_read(&call->ref) == 0) { - if (rxrpc_to_client(sp) || - sp->hdr.type != RXRPC_PACKET_TYPE_DATA) - goto bad_message; - if (sp->hdr.seq != 1) - goto discard; - call = rxrpc_new_incoming_call(local, rx, skb); - if (!call) - goto reject_packet; - } - - /* Process a call packet; this either discards or passes on the ref - * elsewhere. 
- */ - rxrpc_input_call_packet(call, skb); - goto out; - -discard: - rxrpc_free_skb(skb, rxrpc_skb_put_input); -out: - trace_rxrpc_rx_done(0, 0); - return 0; - -wrong_security: - trace_rxrpc_abort(0, "SEC", sp->hdr.cid, sp->hdr.callNumber, sp->hdr.seq, - RXKADINCONSISTENCY, EBADMSG); - skb->priority = RXKADINCONSISTENCY; - goto post_abort; - -unsupported_service: - trace_rxrpc_abort(0, "INV", sp->hdr.cid, sp->hdr.callNumber, sp->hdr.seq, - RX_INVALID_OPERATION, EOPNOTSUPP); - skb->priority = RX_INVALID_OPERATION; - goto post_abort; - -reupgrade: - trace_rxrpc_abort(0, "UPG", sp->hdr.cid, sp->hdr.callNumber, sp->hdr.seq, - RX_PROTOCOL_ERROR, EBADMSG); - goto protocol_error; - -bad_message: - trace_rxrpc_abort(0, "BAD", sp->hdr.cid, sp->hdr.callNumber, sp->hdr.seq, - RX_PROTOCOL_ERROR, EBADMSG); -protocol_error: - skb->priority = RX_PROTOCOL_ERROR; -post_abort: - skb->mark = RXRPC_SKB_MARK_REJECT_ABORT; -reject_packet: - trace_rxrpc_rx_done(skb->mark, skb->priority); - rxrpc_reject_packet(local, skb); - _leave(" [badmsg]"); - return 0; -} diff --git a/net/rxrpc/io_thread.c b/net/rxrpc/io_thread.c new file mode 100644 index 000000000000..d2aaad5afa1d --- /dev/null +++ b/net/rxrpc/io_thread.c @@ -0,0 +1,370 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* RxRPC packet reception + * + * Copyright (C) 2007, 2016 Red Hat, Inc. All Rights Reserved. + * Written by David Howells (dhowells@redhat.com) + */ + +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + +#include "ar-internal.h" + +/* + * post connection-level events to the connection + * - this includes challenges, responses, some aborts and call terminal packet + * retransmission. + */ +static void rxrpc_post_packet_to_conn(struct rxrpc_connection *conn, + struct sk_buff *skb) +{ + _enter("%p,%p", conn, skb); + + skb_queue_tail(&conn->rx_queue, skb); + rxrpc_queue_conn(conn, rxrpc_conn_queue_rx_work); +} + +/* + * post endpoint-level events to the local endpoint + * - this includes debug and version messages + */ +static void rxrpc_post_packet_to_local(struct rxrpc_local *local, + struct sk_buff *skb) +{ + _enter("%p,%p", local, skb); + + if (rxrpc_get_local_maybe(local, rxrpc_local_get_queue)) { + skb_queue_tail(&local->event_queue, skb); + rxrpc_queue_local(local); + } else { + rxrpc_free_skb(skb, rxrpc_skb_put_input); + } +} + +/* + * put a packet up for transport-level abort + */ +static void rxrpc_reject_packet(struct rxrpc_local *local, struct sk_buff *skb) +{ + if (rxrpc_get_local_maybe(local, rxrpc_local_get_queue)) { + skb_queue_tail(&local->reject_queue, skb); + rxrpc_queue_local(local); + } else { + rxrpc_free_skb(skb, rxrpc_skb_put_input); + } +} + +/* + * Extract the wire header from a packet and translate the byte order. 
+ */ +static noinline +int rxrpc_extract_header(struct rxrpc_skb_priv *sp, struct sk_buff *skb) +{ + struct rxrpc_wire_header whdr; + + /* dig out the RxRPC connection details */ + if (skb_copy_bits(skb, 0, &whdr, sizeof(whdr)) < 0) { + trace_rxrpc_rx_eproto(NULL, sp->hdr.serial, + tracepoint_string("bad_hdr")); + return -EBADMSG; + } + + memset(sp, 0, sizeof(*sp)); + sp->hdr.epoch = ntohl(whdr.epoch); + sp->hdr.cid = ntohl(whdr.cid); + sp->hdr.callNumber = ntohl(whdr.callNumber); + sp->hdr.seq = ntohl(whdr.seq); + sp->hdr.serial = ntohl(whdr.serial); + sp->hdr.flags = whdr.flags; + sp->hdr.type = whdr.type; + sp->hdr.userStatus = whdr.userStatus; + sp->hdr.securityIndex = whdr.securityIndex; + sp->hdr._rsvd = ntohs(whdr._rsvd); + sp->hdr.serviceId = ntohs(whdr.serviceId); + return 0; +} + +/* + * Extract the abort code from an ABORT packet and stash it in skb->priority. + */ +static bool rxrpc_extract_abort(struct sk_buff *skb) +{ + __be32 wtmp; + + if (skb_copy_bits(skb, sizeof(struct rxrpc_wire_header), + &wtmp, sizeof(wtmp)) < 0) + return false; + skb->priority = ntohl(wtmp); + return true; +} + +/* + * handle data received on the local endpoint + * - may be called in interrupt context + * + * [!] Note that as this is called from the encap_rcv hook, the socket is not + * held locked by the caller and nothing prevents sk_user_data on the UDP from + * being cleared in the middle of processing this function. + * + * Called with the RCU read lock held from the IP layer via UDP. + */ +int rxrpc_input_packet(struct sock *udp_sk, struct sk_buff *skb) +{ + struct rxrpc_local *local = rcu_dereference_sk_user_data(udp_sk); + struct rxrpc_connection *conn; + struct rxrpc_channel *chan; + struct rxrpc_call *call = NULL; + struct rxrpc_skb_priv *sp; + struct rxrpc_peer *peer = NULL; + struct rxrpc_sock *rx = NULL; + unsigned int channel; + + _enter("%p", udp_sk); + + if (unlikely(!local)) { + kfree_skb(skb); + return 0; + } + if (skb->tstamp == 0) + skb->tstamp = ktime_get_real(); + + rxrpc_new_skb(skb, rxrpc_skb_new_encap_rcv); + + skb_pull(skb, sizeof(struct udphdr)); + + /* The UDP protocol already released all skb resources; + * we are free to add our own data there. + */ + sp = rxrpc_skb(skb); + + /* dig out the RxRPC connection details */ + if (rxrpc_extract_header(sp, skb) < 0) + goto bad_message; + + if (IS_ENABLED(CONFIG_AF_RXRPC_INJECT_LOSS)) { + static int lose; + if ((lose++ & 7) == 7) { + trace_rxrpc_rx_lose(sp); + rxrpc_free_skb(skb, rxrpc_skb_put_lose); + return 0; + } + } + + if (skb->tstamp == 0) + skb->tstamp = ktime_get_real(); + trace_rxrpc_rx_packet(sp); + + switch (sp->hdr.type) { + case RXRPC_PACKET_TYPE_VERSION: + if (rxrpc_to_client(sp)) + goto discard; + rxrpc_post_packet_to_local(local, skb); + goto out; + + case RXRPC_PACKET_TYPE_BUSY: + if (rxrpc_to_server(sp)) + goto discard; + fallthrough; + case RXRPC_PACKET_TYPE_ACK: + case RXRPC_PACKET_TYPE_ACKALL: + if (sp->hdr.callNumber == 0) + goto bad_message; + break; + case RXRPC_PACKET_TYPE_ABORT: + if (!rxrpc_extract_abort(skb)) + return true; /* Just discard if malformed */ + break; + + case RXRPC_PACKET_TYPE_DATA: + if (sp->hdr.callNumber == 0 || + sp->hdr.seq == 0) + goto bad_message; + + /* Unshare the packet so that it can be modified for in-place + * decryption. 
+ */ + if (sp->hdr.securityIndex != 0) { + struct sk_buff *nskb = skb_unshare(skb, GFP_ATOMIC); + if (!nskb) { + rxrpc_eaten_skb(skb, rxrpc_skb_eaten_by_unshare_nomem); + goto out; + } + + if (nskb != skb) { + rxrpc_eaten_skb(skb, rxrpc_skb_eaten_by_unshare); + skb = nskb; + rxrpc_new_skb(skb, rxrpc_skb_new_unshared); + sp = rxrpc_skb(skb); + } + } + break; + + case RXRPC_PACKET_TYPE_CHALLENGE: + if (rxrpc_to_server(sp)) + goto discard; + break; + case RXRPC_PACKET_TYPE_RESPONSE: + if (rxrpc_to_client(sp)) + goto discard; + break; + + /* Packet types 9-11 should just be ignored. */ + case RXRPC_PACKET_TYPE_PARAMS: + case RXRPC_PACKET_TYPE_10: + case RXRPC_PACKET_TYPE_11: + goto discard; + + default: + goto bad_message; + } + + if (sp->hdr.serviceId == 0) + goto bad_message; + + if (rxrpc_to_server(sp)) { + /* Weed out packets to services we're not offering. Packets + * that would begin a call are explicitly rejected and the rest + * are just discarded. + */ + rx = rcu_dereference(local->service); + if (!rx || (sp->hdr.serviceId != rx->srx.srx_service && + sp->hdr.serviceId != rx->second_service)) { + if (sp->hdr.type == RXRPC_PACKET_TYPE_DATA && + sp->hdr.seq == 1) + goto unsupported_service; + goto discard; + } + } + + conn = rxrpc_find_connection_rcu(local, skb, &peer); + if (conn) { + if (sp->hdr.securityIndex != conn->security_ix) + goto wrong_security; + + if (sp->hdr.serviceId != conn->service_id) { + int old_id; + + if (!test_bit(RXRPC_CONN_PROBING_FOR_UPGRADE, &conn->flags)) + goto reupgrade; + old_id = cmpxchg(&conn->service_id, conn->orig_service_id, + sp->hdr.serviceId); + + if (old_id != conn->orig_service_id && + old_id != sp->hdr.serviceId) + goto reupgrade; + } + + if (sp->hdr.callNumber == 0) { + /* Connection-level packet */ + _debug("CONN %p {%d}", conn, conn->debug_id); + rxrpc_post_packet_to_conn(conn, skb); + goto out; + } + + if ((int)sp->hdr.serial - (int)conn->hi_serial > 0) + conn->hi_serial = sp->hdr.serial; + + /* Call-bound packets are routed by connection channel. */ + channel = sp->hdr.cid & RXRPC_CHANNELMASK; + chan = &conn->channels[channel]; + + /* Ignore really old calls */ + if (sp->hdr.callNumber < chan->last_call) + goto discard; + + if (sp->hdr.callNumber == chan->last_call) { + if (chan->call || + sp->hdr.type == RXRPC_PACKET_TYPE_ABORT) + goto discard; + + /* For the previous service call, if completed + * successfully, we discard all further packets. + */ + if (rxrpc_conn_is_service(conn) && + chan->last_type == RXRPC_PACKET_TYPE_ACK) + goto discard; + + /* But otherwise we need to retransmit the final packet + * from data cached in the connection record. 
+ */ + if (sp->hdr.type == RXRPC_PACKET_TYPE_DATA) + trace_rxrpc_rx_data(chan->call_debug_id, + sp->hdr.seq, + sp->hdr.serial, + sp->hdr.flags); + rxrpc_post_packet_to_conn(conn, skb); + goto out; + } + + call = rcu_dereference(chan->call); + + if (sp->hdr.callNumber > chan->call_id) { + if (rxrpc_to_client(sp)) + goto reject_packet; + if (call) + rxrpc_input_implicit_end_call(rx, conn, call); + call = NULL; + } + + if (call) { + if (sp->hdr.serviceId != call->service_id) + call->service_id = sp->hdr.serviceId; + if ((int)sp->hdr.serial - (int)call->rx_serial > 0) + call->rx_serial = sp->hdr.serial; + if (!test_bit(RXRPC_CALL_RX_HEARD, &call->flags)) + set_bit(RXRPC_CALL_RX_HEARD, &call->flags); + } + } + + if (!call || refcount_read(&call->ref) == 0) { + if (rxrpc_to_client(sp) || + sp->hdr.type != RXRPC_PACKET_TYPE_DATA) + goto bad_message; + if (sp->hdr.seq != 1) + goto discard; + call = rxrpc_new_incoming_call(local, rx, skb); + if (!call) + goto reject_packet; + } + + /* Process a call packet; this either discards or passes on the ref + * elsewhere. + */ + rxrpc_input_call_packet(call, skb); + goto out; + +discard: + rxrpc_free_skb(skb, rxrpc_skb_put_input); +out: + trace_rxrpc_rx_done(0, 0); + return 0; + +wrong_security: + trace_rxrpc_abort(0, "SEC", sp->hdr.cid, sp->hdr.callNumber, sp->hdr.seq, + RXKADINCONSISTENCY, EBADMSG); + skb->priority = RXKADINCONSISTENCY; + goto post_abort; + +unsupported_service: + trace_rxrpc_abort(0, "INV", sp->hdr.cid, sp->hdr.callNumber, sp->hdr.seq, + RX_INVALID_OPERATION, EOPNOTSUPP); + skb->priority = RX_INVALID_OPERATION; + goto post_abort; + +reupgrade: + trace_rxrpc_abort(0, "UPG", sp->hdr.cid, sp->hdr.callNumber, sp->hdr.seq, + RX_PROTOCOL_ERROR, EBADMSG); + goto protocol_error; + +bad_message: + trace_rxrpc_abort(0, "BAD", sp->hdr.cid, sp->hdr.callNumber, sp->hdr.seq, + RX_PROTOCOL_ERROR, EBADMSG); +protocol_error: + skb->priority = RX_PROTOCOL_ERROR; +post_abort: + skb->mark = RXRPC_SKB_MARK_REJECT_ABORT; +reject_packet: + trace_rxrpc_rx_done(skb->mark, skb->priority); + rxrpc_reject_packet(local, skb); + _leave(" [badmsg]"); + return 0; +} From patchwork Wed Nov 30 16:56:54 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 27896 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp1040856wrr; Wed, 30 Nov 2022 09:03:13 -0800 (PST) X-Google-Smtp-Source: AA0mqf69OVzFYspeXy5bXwgVhEJ/8ha3bbIaiSROSgEdR6mI91CKZ5DlkJQ6D5xHBYS0tz9uBoOg X-Received: by 2002:a05:6402:28ac:b0:46a:b8d0:a052 with SMTP id eg44-20020a05640228ac00b0046ab8d0a052mr24295746edb.399.1669827793357; Wed, 30 Nov 2022 09:03:13 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669827793; cv=none; d=google.com; s=arc-20160816; b=KTp3tWioQuUAirdCUiYrFsQyeN1RAsJJ7WpME3S1lwjWGVQxWdgWyyCtyiRX0IAkHR //e5Tx7IrZ6udH8euEXxfQLZkCgro7xlhuE/W2224gt+GTx99gdFFTAobAkqY+r9UoDK kebEOOIjZGlx4Dukv9YxryeWqn39bFj0lgXiSd57Ggzh/xWvcgv6P9qqvRCx/Pv0zpPq RTISX8km79k05biQ3SkFJq3KmBf+o1vK/rl6/PJGqb53jEX1OkCs4tEF3U0MQ/Fr6sNv YPsGgFi3j9tSwcNPWUVCWpDmhor1daSsfZCeMudW24eqN3aH7rPmpx3CVPJREVo34Xqw LtUA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:cc:to:from :subject:organization:dkim-signature; bh=f0bYOASdO/Li33R/J7KbVGZP6aCzO4ecxhhHKkbaLSU=; b=GgmE4diZ9F7CQdp7rqOOl0y2o5sI8ackQ2hJlcf15qAB6PujdORCezpPYPIL7LHNxZ 
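The receive path moved into io_thread.c in the preceding patch starts by pulling the wire header out of the skb and converting it to host byte order (rxrpc_extract_header()). For readers unfamiliar with that step, here is a tiny standalone userspace sketch of the same idea; the struct layout is simplified and is not the real struct rxrpc_wire_header, and all names are invented.

	/* Standalone sketch of the byte-order translation step. */
	#include <arpa/inet.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	struct demo_wire_hdr {
		uint32_t epoch;		/* big-endian on the wire */
		uint32_t cid;
		uint32_t call_number;
		uint32_t seq;
		uint32_t serial;
		uint16_t service_id;
	} __attribute__((packed));

	struct demo_host_hdr {
		uint32_t epoch, cid, call_number, seq, serial;
		uint16_t service_id;
	};

	static int demo_extract_header(struct demo_host_hdr *out,
				       const void *pkt, size_t len)
	{
		struct demo_wire_hdr whdr;

		if (len < sizeof(whdr))
			return -1;	/* runt packet; the kernel returns -EBADMSG */
		memcpy(&whdr, pkt, sizeof(whdr));

		out->epoch	 = ntohl(whdr.epoch);
		out->cid	 = ntohl(whdr.cid);
		out->call_number = ntohl(whdr.call_number);
		out->seq	 = ntohl(whdr.seq);
		out->serial	 = ntohl(whdr.serial);
		out->service_id	 = ntohs(whdr.service_id);
		return 0;
	}

	int main(void)
	{
		struct demo_wire_hdr wire = {
			.epoch	     = htonl(1234),
			.cid	     = htonl(0x10),
			.call_number = htonl(1),
			.seq	     = htonl(1),
			.serial	     = htonl(42),
			.service_id  = htons(52),
		};
		struct demo_host_hdr hdr;

		if (demo_extract_header(&hdr, &wire, sizeof(wire)) == 0)
			printf("epoch=%u cid=%x call=%u serial=%u service=%u\n",
			       (unsigned)hdr.epoch, (unsigned)hdr.cid,
			       (unsigned)hdr.call_number, (unsigned)hdr.serial,
			       (unsigned)hdr.service_id);
		return 0;
	}

In the kernel the header is copied out of the skb with skb_copy_bits() rather than memcpy(), and a bad header is traced and rejected with -EBADMSG.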
3798903 Subject: [PATCH net-next 18/35] rxrpc: Create a per-local endpoint receive queue and I/O thread From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Wed, 30 Nov 2022 16:56:54 +0000 Message-ID: <166982741440.621383.4325041430555712070.stgit@warthog.procyon.org.uk> In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> References: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750941348246986607?= X-GMAIL-MSGID: =?utf-8?q?1750941348246986607?= Create a per-local receive queue to which, in a future patch, all incoming packets will be directed and an I/O thread that will process those packets and perform all transmission of packets. Destruction of the local endpoint is also moved from the local processor work item (which will be absorbed) to the thread. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- net/rxrpc/ar-internal.h | 10 +++++++++ net/rxrpc/io_thread.c | 51 +++++++++++++++++++++++++++++++++++++++++++++- net/rxrpc/local_object.c | 39 ++++++++++++++++++++--------------- net/rxrpc/proc.c | 12 ++++++++--- 4 files changed, 91 insertions(+), 21 deletions(-) diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 523cc9c5ab12..de82c25956a6 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -110,6 +110,8 @@ struct rxrpc_net { atomic_t stat_rx_acks[256]; atomic_t stat_why_req_ack[8]; + + atomic_t stat_io_loop; }; /* @@ -280,12 +282,14 @@ struct rxrpc_local { struct hlist_node link; struct socket *socket; /* my UDP socket */ struct work_struct processor; + struct task_struct *io_thread; struct list_head ack_tx_queue; /* List of ACKs that need sending */ spinlock_t ack_tx_lock; /* ACK list lock */ struct rxrpc_sock __rcu *service; /* Service(s) listening on this endpoint */ struct rw_semaphore defrag_sem; /* control re-enablement of IP DF bit */ struct sk_buff_head reject_queue; /* packets awaiting rejection */ struct sk_buff_head event_queue; /* endpoint event packets awaiting processing */ + struct sk_buff_head rx_queue; /* Received packets */ struct rb_root client_bundles; /* Client connection bundles by socket params */ spinlock_t client_bundles_lock; /* Lock for client_bundles */ spinlock_t lock; /* access lock */ @@ -954,6 +958,11 @@ void rxrpc_input_implicit_end_call(struct rxrpc_sock *, struct rxrpc_connection * io_thread.c */ int rxrpc_input_packet(struct sock *, struct sk_buff *); +int rxrpc_io_thread(void *data); +static inline void rxrpc_wake_up_io_thread(struct rxrpc_local *local) +{ + wake_up_process(local->io_thread); +} /* * insecure.c @@ -984,6 +993,7 @@ void rxrpc_put_local(struct rxrpc_local *, enum rxrpc_local_trace); struct rxrpc_local *rxrpc_use_local(struct rxrpc_local *, enum rxrpc_local_trace); void rxrpc_unuse_local(struct rxrpc_local *, enum rxrpc_local_trace); void 
rxrpc_queue_local(struct rxrpc_local *); +void rxrpc_destroy_local(struct rxrpc_local *local); void rxrpc_destroy_all_locals(struct rxrpc_net *); static inline bool __rxrpc_unuse_local(struct rxrpc_local *local, diff --git a/net/rxrpc/io_thread.c b/net/rxrpc/io_thread.c index d2aaad5afa1d..0b3e096e3d50 100644 --- a/net/rxrpc/io_thread.c +++ b/net/rxrpc/io_thread.c @@ -1,7 +1,7 @@ // SPDX-License-Identifier: GPL-2.0-or-later /* RxRPC packet reception * - * Copyright (C) 2007, 2016 Red Hat, Inc. All Rights Reserved. + * Copyright (C) 2007, 2016, 2022 Red Hat, Inc. All Rights Reserved. * Written by David Howells (dhowells@redhat.com) */ @@ -368,3 +368,52 @@ int rxrpc_input_packet(struct sock *udp_sk, struct sk_buff *skb) _leave(" [badmsg]"); return 0; } + +/* + * I/O and event handling thread. + */ +int rxrpc_io_thread(void *data) +{ + struct sk_buff_head rx_queue; + struct rxrpc_local *local = data; + struct sk_buff *skb; + + skb_queue_head_init(&rx_queue); + + set_user_nice(current, MIN_NICE); + + for (;;) { + rxrpc_inc_stat(local->rxnet, stat_io_loop); + + /* Process received packets and errors. */ + if ((skb = __skb_dequeue(&rx_queue))) { + // TODO: Input packet + rxrpc_free_skb(skb, rxrpc_skb_put_input); + continue; + } + + if (!skb_queue_empty(&local->rx_queue)) { + spin_lock_irq(&local->rx_queue.lock); + skb_queue_splice_tail_init(&local->rx_queue, &rx_queue); + spin_unlock_irq(&local->rx_queue.lock); + continue; + } + + set_current_state(TASK_INTERRUPTIBLE); + if (!skb_queue_empty(&local->rx_queue)) { + __set_current_state(TASK_RUNNING); + continue; + } + + if (kthread_should_stop()) + break; + schedule(); + } + + __set_current_state(TASK_RUNNING); + rxrpc_see_local(local, rxrpc_local_stop); + rxrpc_destroy_local(local); + local->io_thread = NULL; + rxrpc_see_local(local, rxrpc_local_stopped); + return 0; +} diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c index 1617ce651b9b..7c61349984e3 100644 --- a/net/rxrpc/local_object.c +++ b/net/rxrpc/local_object.c @@ -103,6 +103,7 @@ static struct rxrpc_local *rxrpc_alloc_local(struct rxrpc_net *rxnet, init_rwsem(&local->defrag_sem); skb_queue_head_init(&local->reject_queue); skb_queue_head_init(&local->event_queue); + skb_queue_head_init(&local->rx_queue); local->client_bundles = RB_ROOT; spin_lock_init(&local->client_bundles_lock); spin_lock_init(&local->lock); @@ -126,6 +127,7 @@ static int rxrpc_open_socket(struct rxrpc_local *local, struct net *net) struct udp_tunnel_sock_cfg tuncfg = {NULL}; struct sockaddr_rxrpc *srx = &local->srx; struct udp_port_cfg udp_conf = {0}; + struct task_struct *io_thread; struct sock *usk; int ret; @@ -185,8 +187,23 @@ static int rxrpc_open_socket(struct rxrpc_local *local, struct net *net) BUG(); } + io_thread = kthread_run(rxrpc_io_thread, local, + "krxrpcio/%u", ntohs(udp_conf.local_udp_port)); + if (IS_ERR(io_thread)) { + ret = PTR_ERR(io_thread); + goto error_sock; + } + + local->io_thread = io_thread; _leave(" = 0"); return 0; + +error_sock: + kernel_sock_shutdown(local->socket, SHUT_RDWR); + local->socket->sk->sk_user_data = NULL; + sock_release(local->socket); + local->socket = NULL; + return ret; } /* @@ -360,19 +377,8 @@ struct rxrpc_local *rxrpc_use_local(struct rxrpc_local *local, */ void rxrpc_unuse_local(struct rxrpc_local *local, enum rxrpc_local_trace why) { - unsigned int debug_id; - int r, u; - - if (local) { - debug_id = local->debug_id; - r = refcount_read(&local->ref); - u = atomic_dec_return(&local->active_users); - trace_rxrpc_local(debug_id, why, r, u); - if (u == 0) { 
- rxrpc_get_local(local, rxrpc_local_get_queue); - rxrpc_queue_local(local); - } - } + if (local && __rxrpc_unuse_local(local, why)) + kthread_stop(local->io_thread); } /* @@ -382,7 +388,7 @@ void rxrpc_unuse_local(struct rxrpc_local *local, enum rxrpc_local_trace why) * Closing the socket cannot be done from bottom half context or RCU callback * context because it might sleep. */ -static void rxrpc_local_destroyer(struct rxrpc_local *local) +void rxrpc_destroy_local(struct rxrpc_local *local) { struct socket *socket = local->socket; struct rxrpc_net *rxnet = local->rxnet; @@ -411,6 +417,7 @@ static void rxrpc_local_destroyer(struct rxrpc_local *local) */ rxrpc_purge_queue(&local->reject_queue); rxrpc_purge_queue(&local->event_queue); + rxrpc_purge_queue(&local->rx_queue); } /* @@ -430,10 +437,8 @@ static void rxrpc_local_processor(struct work_struct *work) do { again = false; - if (!__rxrpc_use_local(local, rxrpc_local_use_work)) { - rxrpc_local_destroyer(local); + if (!__rxrpc_use_local(local, rxrpc_local_use_work)) break; - } if (!list_empty(&local->ack_tx_queue)) { rxrpc_transmit_ack_packets(local); diff --git a/net/rxrpc/proc.c b/net/rxrpc/proc.c index d3a6d24cf871..35d5b43c677e 100644 --- a/net/rxrpc/proc.c +++ b/net/rxrpc/proc.c @@ -342,7 +342,7 @@ static int rxrpc_local_seq_show(struct seq_file *seq, void *v) if (v == SEQ_START_TOKEN) { seq_puts(seq, "Proto Local " - " Use Act\n"); + " Use Act RxQ\n"); return 0; } @@ -351,10 +351,11 @@ static int rxrpc_local_seq_show(struct seq_file *seq, void *v) sprintf(lbuff, "%pISpc", &local->srx.transport); seq_printf(seq, - "UDP %-47.47s %3u %3u\n", + "UDP %-47.47s %3u %3u %3u\n", lbuff, refcount_read(&local->ref), - atomic_read(&local->active_users)); + atomic_read(&local->active_users), + local->rx_queue.qlen); return 0; } @@ -463,6 +464,9 @@ int rxrpc_stats_show(struct seq_file *seq, void *v) "Buffers : txb=%u rxb=%u\n", atomic_read(&rxrpc_nr_txbuf), atomic_read(&rxrpc_n_rx_skbs)); + seq_printf(seq, + "IO-thread: loops=%u\n", + atomic_read(&rxnet->stat_io_loop)); return 0; } @@ -492,5 +496,7 @@ int rxrpc_stats_clear(struct file *file, char *buf, size_t size) memset(&rxnet->stat_rx_acks, 0, sizeof(rxnet->stat_rx_acks)); memset(&rxnet->stat_why_req_ack, 0, sizeof(rxnet->stat_why_req_ack)); + + atomic_set(&rxnet->stat_io_loop, 0); return size; } From patchwork Wed Nov 30 16:57:03 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 27891 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp1040100wrr; Wed, 30 Nov 2022 09:02:07 -0800 (PST) X-Google-Smtp-Source: AA0mqf6xR/ulWJ1zmzxL9vA8cKXLha3YY+fRXBYeZuQivw+QNj8YzUNbf3NNcpmPI7QTehsEOrPo X-Received: by 2002:a05:6402:110d:b0:469:dd6:bfee with SMTP id u13-20020a056402110d00b004690dd6bfeemr40800688edv.330.1669827726824; Wed, 30 Nov 2022 09:02:06 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669827726; cv=none; d=google.com; s=arc-20160816; b=Jd/8/zJek8bj1KyrKzIkeVHgw+9v7fTkeaxsKyEeCMvOUrXu01Rp3Fdiev+tVrrAtr DLpzqZqan9bSMlReF3KM86x7GLzf2nWvH1oRHY9jC83vm/nmvpf/mvLFeZdyafmV5iKb WiZ7BWdLv5a9NVb4VDuOMNB+nl/HzuO4Ou4Rsmrz6o8yCP9WnzRqbf6vAIDGi/0QHPBO MeQJbFBfPnMuYigod0d1H2mNlHcVKFGBAueb1X0YVL1vh5tZjhgToBRuVcl1k/9snNfh 2rppP2JPsF6X0zPoHMh7IYAzX/SOW49us/bzqJDwlGm3o0AKSpsDc6BBx41JJHuO8r5V YuCA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version 
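The shape of the per-endpoint receive queue and I/O thread introduced in the preceding patch can be illustrated with a small userspace analogue: the producer side (standing in for the encap_rcv softirq hook) only queues the packet and wakes the thread, and the thread does all the processing in its own context. This sketch uses pthreads rather than kthreads and invented names throughout; it shows the pattern, not the kernel implementation.

	/* Userspace analogue of the per-endpoint rx queue + I/O thread pattern.
	 * A condition variable stands in for the TASK_INTERRUPTIBLE/schedule()
	 * loop in the patch.
	 */
	#include <pthread.h>
	#include <stdbool.h>
	#include <stdio.h>
	#include <stdlib.h>

	struct demo_pkt {
		struct demo_pkt	*next;
		int		serial;
	};

	struct demo_endpoint {
		pthread_mutex_t	lock;
		pthread_cond_t	wake;
		struct demo_pkt	*head, **tail;
		bool		stopping;
	};

	static struct demo_endpoint demo_ep = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.wake = PTHREAD_COND_INITIALIZER,
		.tail = &demo_ep.head,
	};

	/* Producer side: queue the packet and kick the I/O thread, nothing else. */
	static void demo_rcv(struct demo_endpoint *ep, struct demo_pkt *pkt)
	{
		pkt->next = NULL;
		pthread_mutex_lock(&ep->lock);
		*ep->tail = pkt;
		ep->tail = &pkt->next;
		pthread_cond_signal(&ep->wake);
		pthread_mutex_unlock(&ep->lock);
	}

	/* Consumer side: the I/O thread drains the queue in process context. */
	static void *demo_io_thread(void *data)
	{
		struct demo_endpoint *ep = data;

		pthread_mutex_lock(&ep->lock);
		for (;;) {
			while (ep->head) {
				struct demo_pkt *pkt = ep->head;

				ep->head = pkt->next;
				if (!ep->head)
					ep->tail = &ep->head;
				pthread_mutex_unlock(&ep->lock);
				printf("processed packet serial %d\n", pkt->serial);
				free(pkt);
				pthread_mutex_lock(&ep->lock);
			}
			if (ep->stopping)
				break;
			pthread_cond_wait(&ep->wake, &ep->lock);
		}
		pthread_mutex_unlock(&ep->lock);
		return NULL;
	}

	int main(void)
	{
		pthread_t io;
		int i;

		pthread_create(&io, NULL, demo_io_thread, &demo_ep);
		for (i = 1; i <= 3; i++) {
			struct demo_pkt *pkt = malloc(sizeof(*pkt));

			pkt->serial = i;
			demo_rcv(&demo_ep, pkt);
		}
		pthread_mutex_lock(&demo_ep.lock);
		demo_ep.stopping = true;
		pthread_cond_signal(&demo_ep.wake);
		pthread_mutex_unlock(&demo_ep.lock);
		pthread_join(io, NULL);
		return 0;
	}

In the kernel patch the queue is a struct sk_buff_head, the wakeup is wake_up_process() on the stored io_thread task, and the wait is the TASK_INTERRUPTIBLE/schedule() loop shown in rxrpc_io_thread(); kthread_should_stop() plays the role of the stopping flag used here.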
Subject: [PATCH net-next 19/35] rxrpc: Move packet reception processing into I/O thread
From: David Howells
To: netdev@vger.kernel.org
Cc: Marc Dionne, linux-afs@lists.infradead.org, dhowells@redhat.com,
    linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org
Date: Wed, 30 Nov 2022 16:57:03 +0000
Message-ID: <166982742306.621383.3217124311708208721.stgit@warthog.procyon.org.uk>
In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk>

Split the packet input handler to make the softirq side just dump the
received packet into the local endpoint receive queue and then call the
remainder of the input function from the I/O thread.

Signed-off-by: David Howells
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
---
 net/rxrpc/ar-internal.h  |    3 ++
 net/rxrpc/call_event.c   |    4 ++-
 net/rxrpc/call_object.c  |    2 +-
 net/rxrpc/io_thread.c    |   61 +++++++++++++++++++++++++++++---------------
 net/rxrpc/local_object.c |    2 +-
 5 files changed, 47 insertions(+), 25 deletions(-)

diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
index de82c25956a6..044815ba2b49 100644
--- a/net/rxrpc/ar-internal.h
+++ b/net/rxrpc/ar-internal.h
@@ -36,6 +36,7 @@ struct rxrpc_txbuf;
  * to pass supplementary information.
  */
 enum rxrpc_skb_mark {
+	RXRPC_SKB_MARK_PACKET,		/* Received packet */
 	RXRPC_SKB_MARK_REJECT_BUSY,	/* Reject with BUSY */
 	RXRPC_SKB_MARK_REJECT_ABORT,	/* Reject with ABORT (code in skb->priority) */
 };
@@ -957,7 +958,7 @@ void rxrpc_input_implicit_end_call(struct rxrpc_sock *, struct rxrpc_connection
 /*
  * io_thread.c
  */
-int rxrpc_input_packet(struct sock *, struct sk_buff *);
+int rxrpc_encap_rcv(struct sock *, struct sk_buff *);
 int rxrpc_io_thread(void *data);
 static inline void rxrpc_wake_up_io_thread(struct rxrpc_local *local)
 {
diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c
index 049b92b1c040..3925b55e2064 100644
--- a/net/rxrpc/call_event.c
+++ b/net/rxrpc/call_event.c
@@ -83,7 +83,7 @@ void rxrpc_send_ACK(struct rxrpc_call *call, u8 ack_reason,
 	rxrpc_inc_stat(call->rxnet, stat_tx_acks[ack_reason]);
 
 	txb = rxrpc_alloc_txbuf(call, RXRPC_PACKET_TYPE_ACK,
-				in_softirq() ? GFP_ATOMIC | __GFP_NOWARN : GFP_NOFS);
+				rcu_read_lock_held() ? GFP_ATOMIC | __GFP_NOWARN : GFP_NOFS);
 	if (!txb) {
 		kleave(" = -ENOMEM");
 		return;
@@ -111,7 +111,7 @@ void rxrpc_send_ACK(struct rxrpc_call *call, u8 ack_reason,
 	spin_unlock_bh(&local->ack_tx_lock);
 	trace_rxrpc_send_ack(call, why, ack_reason, serial);
 
-	if (in_task()) {
+	if (!rcu_read_lock_held()) {
 		rxrpc_transmit_ack_packets(call->peer->local);
 	} else {
 		rxrpc_get_local(local, rxrpc_local_get_queue);
diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
index 9cd7e0190ef4..57c8d4cc900a 100644
--- a/net/rxrpc/call_object.c
+++ b/net/rxrpc/call_object.c
@@ -632,7 +632,7 @@ void rxrpc_cleanup_call(struct rxrpc_call *call)
 	del_timer_sync(&call->timer);
 	cancel_work(&call->processor);
 
-	if (in_softirq() || work_busy(&call->processor))
+	if (rcu_read_lock_held() || work_busy(&call->processor))
 		/* Can't use the rxrpc workqueue as we need to cancel/flush
 		 * something that may be running/waiting there.
 		 */
diff --git a/net/rxrpc/io_thread.c b/net/rxrpc/io_thread.c
index 0b3e096e3d50..ee2e36c46ae2 100644
--- a/net/rxrpc/io_thread.c
+++ b/net/rxrpc/io_thread.c
@@ -9,6 +9,34 @@
 
 #include "ar-internal.h"
 
+/*
+ * handle data received on the local endpoint
+ * - may be called in interrupt context
+ *
+ * [!] Note that as this is called from the encap_rcv hook, the socket is not
+ * held locked by the caller and nothing prevents sk_user_data on the UDP from
+ * being cleared in the middle of processing this function.
+ *
+ * Called with the RCU read lock held from the IP layer via UDP.
+ */
+int rxrpc_encap_rcv(struct sock *udp_sk, struct sk_buff *skb)
+{
+	struct rxrpc_local *local = rcu_dereference_sk_user_data(udp_sk);
+
+	if (unlikely(!local)) {
+		kfree_skb(skb);
+		return 0;
+	}
+	if (skb->tstamp == 0)
+		skb->tstamp = ktime_get_real();
+
+	skb->mark = RXRPC_SKB_MARK_PACKET;
+	rxrpc_new_skb(skb, rxrpc_skb_new_encap_rcv);
+	skb_queue_tail(&local->rx_queue, skb);
+	rxrpc_wake_up_io_thread(local);
+	return 0;
+}
+
 /*
  * post connection-level events to the connection
  * - this includes challenges, responses, some aborts and call terminal packet
@@ -98,18 +126,10 @@ static bool rxrpc_extract_abort(struct sk_buff *skb)
 }
 
 /*
- * handle data received on the local endpoint
- * - may be called in interrupt context
- *
- * [!] Note that as this is called from the encap_rcv hook, the socket is not
- * held locked by the caller and nothing prevents sk_user_data on the UDP from
- * being cleared in the middle of processing this function.
- *
- * Called with the RCU read lock held from the IP layer via UDP.
+ * Process packets received on the local endpoint
  */
-int rxrpc_input_packet(struct sock *udp_sk, struct sk_buff *skb)
+static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff *skb)
 {
-	struct rxrpc_local *local = rcu_dereference_sk_user_data(udp_sk);
 	struct rxrpc_connection *conn;
 	struct rxrpc_channel *chan;
 	struct rxrpc_call *call = NULL;
@@ -118,17 +138,9 @@ int rxrpc_input_packet(struct sock *udp_sk, struct sk_buff *skb)
 	struct rxrpc_sock *rx = NULL;
 	unsigned int channel;
 
-	_enter("%p", udp_sk);
-
-	if (unlikely(!local)) {
-		kfree_skb(skb);
-		return 0;
-	}
 	if (skb->tstamp == 0)
 		skb->tstamp = ktime_get_real();
-	rxrpc_new_skb(skb, rxrpc_skb_new_encap_rcv);
-
 	skb_pull(skb, sizeof(struct udphdr));
 
 	/* The UDP protocol already released all skb resources;
@@ -387,8 +399,17 @@ int rxrpc_io_thread(void *data)
 
 		/* Process received packets and errors. */
 		if ((skb = __skb_dequeue(&rx_queue))) {
-			// TODO: Input packet
-			rxrpc_free_skb(skb, rxrpc_skb_put_input);
+			switch (skb->mark) {
+			case RXRPC_SKB_MARK_PACKET:
+				rcu_read_lock();
+				rxrpc_input_packet(local, skb);
+				rcu_read_unlock();
+				break;
+			default:
+				WARN_ON_ONCE(1);
+				rxrpc_free_skb(skb, rxrpc_skb_put_unknown);
+				break;
+			}
 			continue;
 		}
 
diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c
index 7c61349984e3..6b4d77219f36 100644
--- a/net/rxrpc/local_object.c
+++ b/net/rxrpc/local_object.c
@@ -154,7 +154,7 @@ static int rxrpc_open_socket(struct rxrpc_local *local, struct net *net)
 	}
 
 	tuncfg.encap_type = UDP_ENCAP_RXRPC;
-	tuncfg.encap_rcv = rxrpc_input_packet;
+	tuncfg.encap_rcv = rxrpc_encap_rcv;
 	tuncfg.encap_err_rcv = rxrpc_encap_err_rcv;
 	tuncfg.sk_user_data = local;
 	setup_udp_tunnel_sock(net, local->socket, &tuncfg);
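
The handoff pattern adopted here - the receive hook only tags the packet, appends
it to the endpoint's queue and wakes a single I/O thread, which does all of the
actual parsing in process context - can be sketched in plain user-space C with
pthreads.  This is only an illustrative sketch with invented names (struct pkt,
struct endpoint, rx_hook(), io_thread()); it is not the rxrpc code itself.

/* Sketch: the receive hook only queues and wakes; one thread parses. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct pkt {
	struct pkt *next;
	int mark;			/* what kind of item this is */
	char payload[64];
};

struct endpoint {
	pthread_mutex_t lock;
	pthread_cond_t wake;
	struct pkt *head, **tail;	/* singly-linked FIFO */
};

/* Analogue of the encap_rcv hook: runs in "interrupt" context, so it must
 * not sleep or do heavy work - it just tags, queues and wakes. */
static void rx_hook(struct endpoint *ep, struct pkt *p)
{
	p->mark = 1;
	p->next = NULL;
	pthread_mutex_lock(&ep->lock);
	*ep->tail = p;
	ep->tail = &p->next;
	pthread_mutex_unlock(&ep->lock);
	pthread_cond_signal(&ep->wake);
}

/* Analogue of the I/O thread: the only place packets are parsed. */
static void *io_thread(void *arg)
{
	struct endpoint *ep = arg;

	for (;;) {
		struct pkt *p;

		pthread_mutex_lock(&ep->lock);
		while (!ep->head)
			pthread_cond_wait(&ep->wake, &ep->lock);
		p = ep->head;
		ep->head = p->next;
		if (!ep->head)
			ep->tail = &ep->head;
		pthread_mutex_unlock(&ep->lock);

		printf("processing packet: %s\n", p->payload);
		free(p);
	}
	return NULL;
}

int main(void)
{
	struct endpoint ep = { PTHREAD_MUTEX_INITIALIZER,
			       PTHREAD_COND_INITIALIZER, NULL, NULL };
	struct pkt *p = calloc(1, sizeof(*p));
	pthread_t thr;

	ep.tail = &ep.head;
	snprintf(p->payload, sizeof(p->payload), "DATA call=1 seq=1");
	pthread_create(&thr, NULL, io_thread, &ep);
	rx_hook(&ep, p);	/* cheap producer side */
	sleep(1);		/* let the consumer drain the queue */
	return 0;
}
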
From patchwork Wed Nov 30 16:57:11 2022
X-Patchwork-Id: 27904
Subject: [PATCH net-next 20/35] rxrpc: Move error processing into the local endpoint I/O thread
From: David Howells
To: netdev@vger.kernel.org
Cc: Marc Dionne, linux-afs@lists.infradead.org, dhowells@redhat.com,
    linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org
Date: Wed, 30 Nov 2022 16:57:11 +0000
Message-ID: <166982743155.621383.1445954662790362436.stgit@warthog.procyon.org.uk>
In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk>

Move the processing of error packets into the local endpoint I/O thread,
leaving the handover from UDP to merely transfer them into the local
endpoint queue.

Signed-off-by: David Howells
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
---
 net/rxrpc/ar-internal.h |    4 +++-
 net/rxrpc/io_thread.c   |   29 +++++++++++++++++++++++++++++
 net/rxrpc/peer_event.c  |   41 ++++++-----------------------------------
 3 files changed, 38 insertions(+), 36 deletions(-)

diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
index 044815ba2b49..566377c64184 100644
--- a/net/rxrpc/ar-internal.h
+++ b/net/rxrpc/ar-internal.h
@@ -37,6 +37,7 @@ struct rxrpc_txbuf;
  */
 enum rxrpc_skb_mark {
 	RXRPC_SKB_MARK_PACKET,		/* Received packet */
+	RXRPC_SKB_MARK_ERROR,		/* Error notification */
 	RXRPC_SKB_MARK_REJECT_BUSY,	/* Reject with BUSY */
 	RXRPC_SKB_MARK_REJECT_ABORT,	/* Reject with ABORT (code in skb->priority) */
 };
@@ -959,6 +960,7 @@ void rxrpc_input_implicit_end_call(struct rxrpc_sock *, struct rxrpc_connection
  * io_thread.c
  */
 int rxrpc_encap_rcv(struct sock *, struct sk_buff *);
+void rxrpc_error_report(struct sock *);
 int rxrpc_io_thread(void *data);
 static inline void rxrpc_wake_up_io_thread(struct rxrpc_local *local)
 {
@@ -1063,7 +1065,7 @@ void rxrpc_send_keepalive(struct rxrpc_peer *);
 /*
  * peer_event.c
  */
-void rxrpc_error_report(struct sock *);
+void rxrpc_input_error(struct rxrpc_local *, struct sk_buff *);
 void rxrpc_peer_keepalive_worker(struct work_struct *);
 
 /*
diff --git a/net/rxrpc/io_thread.c b/net/rxrpc/io_thread.c
index ee2e36c46ae2..416c6101cf78 100644
--- a/net/rxrpc/io_thread.c
+++ b/net/rxrpc/io_thread.c
@@ -37,6 +37,31 @@ int rxrpc_encap_rcv(struct sock *udp_sk, struct sk_buff *skb)
 	return 0;
 }
 
+/*
+ * Handle an error received on the local endpoint.
+ */
+void rxrpc_error_report(struct sock *sk)
+{
+	struct rxrpc_local *local;
+	struct sk_buff *skb;
+
+	rcu_read_lock();
+	local = rcu_dereference_sk_user_data(sk);
+	if (unlikely(!local)) {
+		rcu_read_unlock();
+		return;
+	}
+
+	while ((skb = skb_dequeue(&sk->sk_error_queue))) {
+		skb->mark = RXRPC_SKB_MARK_ERROR;
+		rxrpc_new_skb(skb, rxrpc_skb_new_error_report);
+		skb_queue_tail(&local->rx_queue, skb);
+	}
+
+	rxrpc_wake_up_io_thread(local);
+	rcu_read_unlock();
+}
+
 /*
  * post connection-level events to the connection
  * - this includes challenges, responses, some aborts and call terminal packet
@@ -405,6 +430,10 @@ int rxrpc_io_thread(void *data)
 				rxrpc_input_packet(local, skb);
 				rcu_read_unlock();
 				break;
+			case RXRPC_SKB_MARK_ERROR:
+				rxrpc_input_error(local, skb);
+				rxrpc_free_skb(skb, rxrpc_skb_put_error_report);
+				break;
 			default:
 				WARN_ON_ONCE(1);
 				rxrpc_free_skb(skb, rxrpc_skb_put_unknown);
diff --git a/net/rxrpc/peer_event.c b/net/rxrpc/peer_event.c
index f35cfc458dcf..94f63fb1bd67 100644
--- a/net/rxrpc/peer_event.c
+++ b/net/rxrpc/peer_event.c
@@ -131,51 +131,26 @@ static void rxrpc_adjust_mtu(struct rxrpc_peer *peer, unsigned int mtu)
 /*
  * Handle an error received on the local endpoint.
  */
-void rxrpc_error_report(struct sock *sk)
+void rxrpc_input_error(struct rxrpc_local *local, struct sk_buff *skb)
 {
-	struct sock_exterr_skb *serr;
+	struct sock_exterr_skb *serr = SKB_EXT_ERR(skb);
 	struct sockaddr_rxrpc srx;
-	struct rxrpc_local *local;
 	struct rxrpc_peer *peer = NULL;
-	struct sk_buff *skb;
 
-	rcu_read_lock();
-	local = rcu_dereference_sk_user_data(sk);
-	if (unlikely(!local)) {
-		rcu_read_unlock();
-		return;
-	}
-	_enter("%p{%d}", sk, local->debug_id);
+	_enter("L=%x", local->debug_id);
 
-	/* Clear the outstanding error value on the socket so that it doesn't
-	 * cause kernel_sendmsg() to return it later.
-	 */
-	sock_error(sk);
-
-	skb = sock_dequeue_err_skb(sk);
-	if (!skb) {
-		rcu_read_unlock();
-		_leave("UDP socket errqueue empty");
-		return;
-	}
-	rxrpc_new_skb(skb, rxrpc_skb_new_error_report);
-	serr = SKB_EXT_ERR(skb);
 	if (!skb->len && serr->ee.ee_origin == SO_EE_ORIGIN_TIMESTAMPING) {
 		_leave("UDP empty message");
-		rcu_read_unlock();
-		rxrpc_free_skb(skb, rxrpc_skb_put_error_report);
 		return;
 	}
 
+	rcu_read_lock();
 	peer = rxrpc_lookup_peer_local_rcu(local, skb, &srx);
 	if (peer && !rxrpc_get_peer_maybe(peer, rxrpc_peer_get_input_error))
 		peer = NULL;
-	if (!peer) {
-		rcu_read_unlock();
-		rxrpc_free_skb(skb, rxrpc_skb_put_error_report);
-		_leave(" [no peer]");
+	rcu_read_unlock();
+	if (!peer)
 		return;
-	}
 
 	trace_rxrpc_rx_icmp(peer, &serr->ee, &srx);
 
@@ -188,11 +163,7 @@ void rxrpc_error_report(struct sock *sk)
 	rxrpc_store_error(peer, serr);
 
 out:
-	rcu_read_unlock();
-	rxrpc_free_skb(skb, rxrpc_skb_put_error_report);
 	rxrpc_put_peer(peer, rxrpc_peer_put_input_error);
-
-	_leave("");
 }
 
 /*
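
With errors funnelled down the same per-endpoint queue as data packets, the I/O
thread simply switches on a mark carried by each queued item.  A minimal,
hedged sketch of that dispatch, using invented enum and function names rather
than the kernel's:

#include <stdio.h>

enum item_mark { MARK_PACKET, MARK_ERROR };

struct item {
	enum item_mark mark;
	const char *data;
};

/* One consumer, one dispatch point: each kind of work is handled in
 * process context, in arrival order. */
static void dispatch(struct item *it)
{
	switch (it->mark) {
	case MARK_PACKET:
		printf("input packet: %s\n", it->data);
		break;
	case MARK_ERROR:
		printf("input error report: %s\n", it->data);
		break;
	default:
		fprintf(stderr, "unknown mark %d\n", it->mark);
		break;
	}
}

int main(void)
{
	struct item a = { MARK_PACKET, "DATA seq=1" };
	struct item b = { MARK_ERROR,  "ICMP port unreachable" };

	dispatch(&a);
	dispatch(&b);
	return 0;
}
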
From patchwork Wed Nov 30 16:57:19 2022
X-Patchwork-Id: 27895
Subject: [PATCH net-next 21/35] rxrpc: Remove call->input_lock
From: David Howells
To: netdev@vger.kernel.org
Cc: Marc Dionne, linux-afs@lists.infradead.org, dhowells@redhat.com,
    linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org
Date: Wed, 30 Nov 2022 16:57:19 +0000
Message-ID: <166982743990.621383.18023557629272593979.stgit@warthog.procyon.org.uk>
In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk>

Remove call->input_lock as it was only necessary to serialise access to the
state stored in the rxrpc_call struct by simultaneous softirq handlers
presenting received packets.  The packets are now dumped into a queue and
processed by a single process-context handler, so the lock is no longer
needed.

Signed-off-by: David Howells
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
---
 net/rxrpc/ar-internal.h |    1 -
 net/rxrpc/call_object.c |    1 -
 net/rxrpc/input.c       |   22 +++++-----------------
 3 files changed, 5 insertions(+), 19 deletions(-)

diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
index 566377c64184..654e9dab107c 100644
--- a/net/rxrpc/ar-internal.h
+++ b/net/rxrpc/ar-internal.h
@@ -657,7 +657,6 @@ struct rxrpc_call {
 	rxrpc_seq_t		rx_consumed;	/* Highest packet consumed */
 	rxrpc_serial_t		rx_serial;	/* Highest serial received for this call */
 	u8			rx_winsize;	/* Size of Rx window */
-	spinlock_t		input_lock;	/* Lock for packet input to this call */
 
 	/* TCP-style slow-start congestion control [RFC5681].  Since the SMSS
 	 * is fixed, we keep these numbers in terms of segments (ie. DATA
diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
index 57c8d4cc900a..f6d1b3a6f8c6 100644
--- a/net/rxrpc/call_object.c
+++ b/net/rxrpc/call_object.c
@@ -143,7 +143,6 @@ struct rxrpc_call *rxrpc_alloc_call(struct rxrpc_sock *rx, gfp_t gfp,
 	init_waitqueue_head(&call->waitq);
 	spin_lock_init(&call->notify_lock);
 	spin_lock_init(&call->tx_lock);
-	spin_lock_init(&call->input_lock);
 	spin_lock_init(&call->acks_ack_lock);
 	rwlock_init(&call->state_lock);
 	refcount_set(&call->ref, 1);
diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
index f4f6f3c62d03..13c52145a926 100644
--- a/net/rxrpc/input.c
+++ b/net/rxrpc/input.c
@@ -588,8 +588,6 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb)
 		}
 	}
 
-	spin_lock(&call->input_lock);
-
 	/* Received data implicitly ACKs all of the request packets we sent
 	 * when we're acting as a client.
 	 */
@@ -607,8 +605,6 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb)
 out:
 	trace_rxrpc_notify_socket(call->debug_id, serial);
 	rxrpc_notify_socket(call);
-
-	spin_unlock(&call->input_lock);
 	rxrpc_free_skb(skb, rxrpc_skb_put_input);
 	_leave(" [queued]");
 }
@@ -811,7 +807,7 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
 	offset = sizeof(struct rxrpc_wire_header);
 	if (skb_copy_bits(skb, offset, &ack, sizeof(ack)) < 0) {
 		rxrpc_proto_abort("XAK", call, 0);
-		goto out_not_locked;
+		goto out;
 	}
 	offset += sizeof(ack);
 
@@ -863,7 +859,7 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
 	    rxrpc_is_client_call(call)) {
 		rxrpc_set_call_completion(call, RXRPC_CALL_REMOTELY_ABORTED,
 					  0, -ENETRESET);
-		return;
+		goto out;
 	}
 
 	/* If we get an OUT_OF_SEQUENCE ACK from the server, that can also
@@ -877,7 +873,7 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
 	    rxrpc_is_client_call(call)) {
 		rxrpc_set_call_completion(call, RXRPC_CALL_REMOTELY_ABORTED,
 					  0, -ENETRESET);
-		return;
+		goto out;
 	}
 
 	/* Discard any out-of-order or duplicate ACKs (outside lock). */
@@ -885,7 +881,7 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
 		trace_rxrpc_rx_discard_ack(call->debug_id, ack_serial,
 					   first_soft_ack, call->acks_first_seq,
 					   prev_pkt, call->acks_prev_seq);
-		goto out_not_locked;
+		goto out;
 	}
 
 	info.rxMTU = 0;
@@ -893,14 +889,12 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
 	if (skb->len >= ioffset + sizeof(info) &&
 	    skb_copy_bits(skb, ioffset, &info, sizeof(info)) < 0) {
 		rxrpc_proto_abort("XAI", call, 0);
-		goto out_not_locked;
+		goto out;
 	}
 
 	if (nr_acks > 0)
 		skb_condense(skb);
 
-	spin_lock(&call->input_lock);
-
 	/* Discard any out-of-order or duplicate ACKs (inside lock).
 	 */
 	if (!rxrpc_is_ack_valid(call, first_soft_ack, prev_pkt)) {
 		trace_rxrpc_rx_discard_ack(call->debug_id, ack_serial,
@@ -992,8 +986,6 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
 		rxrpc_congestion_management(call, skb, &summary, acked_serial);
 
 out:
-	spin_unlock(&call->input_lock);
-out_not_locked:
 	rxrpc_free_skb(skb_put, rxrpc_skb_put_input);
 	rxrpc_free_skb(skb_old, rxrpc_skb_put_ack);
 }
@@ -1005,12 +997,8 @@ static void rxrpc_input_ackall(struct rxrpc_call *call, struct sk_buff *skb)
 {
 	struct rxrpc_ack_summary summary = { 0 };
 
-	spin_lock(&call->input_lock);
-
 	if (rxrpc_rotate_tx_window(call, call->tx_top, &summary))
 		rxrpc_end_tx_phase(call, false, "ETL");
-
-	spin_unlock(&call->input_lock);
 }
 
 /*
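
The justification for dropping the lock can be illustrated outside the kernel:
state that used to be updated by concurrent softirq handlers needed a spinlock,
but once every update is funnelled through one consumer thread the same code is
already serialised by its single caller.  A hedged user-space sketch (all names
invented, not the rxrpc structures):

#include <stdio.h>

/* Per-call state that used to be touched by concurrent handlers. */
struct call_state {
	unsigned int rx_highest;	/* highest sequence number seen */
	unsigned int acks_seen;
};

/* Before: any context could call this, so it had to take an input lock.
 * After: only the single I/O thread calls it, so the caller already
 * serialises it and the lock disappears. */
static void input_one(struct call_state *call, unsigned int seq, int is_ack)
{
	if (is_ack)
		call->acks_seen++;
	else if (seq > call->rx_highest)
		call->rx_highest = seq;
}

int main(void)
{
	struct call_state call = { 0, 0 };

	/* All "packets" are presented from this one thread, in order. */
	input_one(&call, 1, 0);
	input_one(&call, 2, 0);
	input_one(&call, 2, 1);
	printf("rx_highest=%u acks_seen=%u\n", call.rx_highest, call.acks_seen);
	return 0;
}
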
From patchwork Wed Nov 30 16:57:28 2022
X-Patchwork-Id: 27901
Subject: [PATCH net-next 22/35] rxrpc: Don't use sk->sk_receive_queue.lock to guard socket state changes
From: David Howells
To: netdev@vger.kernel.org
Cc: Marc Dionne, linux-afs@lists.infradead.org, dhowells@redhat.com,
    linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org
Date: Wed, 30 Nov 2022 16:57:28 +0000
Message-ID: <166982744839.621383.17007015557156214209.stgit@warthog.procyon.org.uk>
In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk>

Don't use sk->sk_receive_queue.lock to guard socket state changes as the
socket mutex is sufficient.

Signed-off-by: David Howells
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
---
 net/rxrpc/af_rxrpc.c |    4 ----
 1 file changed, 4 deletions(-)

diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c
index 7a0dc01741e7..8ad4d85acb0b 100644
--- a/net/rxrpc/af_rxrpc.c
+++ b/net/rxrpc/af_rxrpc.c
@@ -812,14 +812,12 @@ static int rxrpc_shutdown(struct socket *sock, int flags)
 
 	lock_sock(sk);
 
-	spin_lock_bh(&sk->sk_receive_queue.lock);
 	if (sk->sk_state < RXRPC_CLOSE) {
 		sk->sk_state = RXRPC_CLOSE;
 		sk->sk_shutdown = SHUTDOWN_MASK;
 	} else {
 		ret = -ESHUTDOWN;
 	}
-	spin_unlock_bh(&sk->sk_receive_queue.lock);
 
 	rxrpc_discard_prealloc(rx);
 
@@ -872,9 +870,7 @@ static int rxrpc_release_sock(struct sock *sk)
 		break;
 	}
 
-	spin_lock_bh(&sk->sk_receive_queue.lock);
 	sk->sk_state = RXRPC_CLOSE;
-	spin_unlock_bh(&sk->sk_receive_queue.lock);
 
 	if (rx->local && rcu_access_pointer(rx->local->service) == rx) {
 		write_lock(&rx->local->services_lock);
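
The simplification can be pictured as collapsing two levels of locking into
one: if every writer of the state already holds the same outer mutex, an inner
lock taken just around the assignment adds nothing.  A small user-space sketch
under that assumption (names invented, not the kernel socket API):

#include <pthread.h>
#include <stdio.h>

enum sock_state { SS_OPEN, SS_CLOSED };

struct sock_like {
	pthread_mutex_t big_lock;	/* analogue of the socket mutex */
	enum sock_state state;
};

/* All state transitions happen with big_lock held, so no second,
 * inner lock is needed around the assignment itself. */
static int do_shutdown(struct sock_like *sk)
{
	int ret = 0;

	pthread_mutex_lock(&sk->big_lock);
	if (sk->state != SS_CLOSED)
		sk->state = SS_CLOSED;
	else
		ret = -1;	/* already shut down */
	pthread_mutex_unlock(&sk->big_lock);
	return ret;
}

int main(void)
{
	struct sock_like sk = { PTHREAD_MUTEX_INITIALIZER, SS_OPEN };

	printf("first shutdown:  %d\n", do_shutdown(&sk));
	printf("second shutdown: %d\n", do_shutdown(&sk));
	return 0;
}
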
From patchwork Wed Nov 30 16:57:36 2022
X-Patchwork-Id: 27897
Subject: [PATCH net-next 23/35] rxrpc: Implement a mechanism to send an event notification to a call
From: David Howells
To: netdev@vger.kernel.org
Cc: Marc Dionne, linux-afs@lists.infradead.org, dhowells@redhat.com,
    linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org
Date: Wed, 30 Nov 2022 16:57:36 +0000
Message-ID: <166982745677.621383.7813500115966782402.stgit@warthog.procyon.org.uk>
In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk>

Provide a means by which an event notification can be sent to a call such
that the I/O thread can process it rather than it being done in a separate
workqueue.  This will allow a lot of locking to be removed.

Signed-off-by: David Howells
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
---
 include/trace/events/rxrpc.h |   52 ++++++++++++++++++++++++++++++++++++++++++
 net/rxrpc/ar-internal.h      |    5 +++-
 net/rxrpc/call_object.c      |   24 +++++++++++++++++++
 net/rxrpc/input.c            |    3 +-
 net/rxrpc/io_thread.c        |   20 +++++++++++++++-
 net/rxrpc/local_object.c     |    1 +
 6 files changed, 100 insertions(+), 5 deletions(-)

diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
index 44a9be9836f9..0b12d96c7921 100644
--- a/include/trace/events/rxrpc.h
+++ b/include/trace/events/rxrpc.h
@@ -16,6 +16,13 @@
 /*
  * Declare tracing information enums and their string mappings for display.
  */
+#define rxrpc_call_poke_traces \
+	EM(rxrpc_call_poke_error,	"Error")	\
+	EM(rxrpc_call_poke_idle,	"Idle")		\
+	EM(rxrpc_call_poke_start,	"Start")	\
+	EM(rxrpc_call_poke_timer,	"Timer")	\
+	E_(rxrpc_call_poke_timer_now,	"Timer-now")
+
 #define rxrpc_skb_traces \
 	EM(rxrpc_skb_eaten_by_unshare,		"ETN unshare ") \
 	EM(rxrpc_skb_eaten_by_unshare_nomem,	"ETN unshar-nm") \
@@ -150,6 +157,7 @@
 	EM(rxrpc_call_get_input,		"GET input   ") \
 	EM(rxrpc_call_get_kernel_service,	"GET krnl-srv") \
 	EM(rxrpc_call_get_notify_socket,	"GET notify  ") \
+	EM(rxrpc_call_get_poke,			"GET poke    ") \
 	EM(rxrpc_call_get_recvmsg,		"GET recvmsg ") \
 	EM(rxrpc_call_get_release_sock,		"GET rel-sock") \
 	EM(rxrpc_call_get_sendmsg,		"GET sendmsg ") \
@@ -160,6 +168,7 @@
 	EM(rxrpc_call_put_discard_prealloc,	"PUT disc-pre") \
 	EM(rxrpc_call_put_input,		"PUT input   ") \
 	EM(rxrpc_call_put_kernel,		"PUT kernel  ") \
+	EM(rxrpc_call_put_poke,			"PUT poke    ") \
 	EM(rxrpc_call_put_recvmsg,		"PUT recvmsg ") \
 	EM(rxrpc_call_put_release_sock,		"PUT rls-sock") \
 	EM(rxrpc_call_put_release_sock_tba,	"PUT rls-sk-a") \
@@ -378,6 +387,7 @@
 #define E_(a, b) a
 
 enum rxrpc_bundle_trace		{ rxrpc_bundle_traces } __mode(byte);
+enum rxrpc_call_poke_trace	{ rxrpc_call_poke_traces } __mode(byte);
 enum rxrpc_call_trace		{ rxrpc_call_traces } __mode(byte);
 enum rxrpc_client_trace		{ rxrpc_client_traces } __mode(byte);
 enum rxrpc_congest_change	{ rxrpc_congest_changes } __mode(byte);
@@ -408,6 +418,7 @@ enum rxrpc_txqueue_trace { rxrpc_txqueue_traces } __mode(byte);
 #define E_(a, b) TRACE_DEFINE_ENUM(a);
 
 rxrpc_bundle_traces;
+rxrpc_call_poke_traces;
 rxrpc_call_traces;
 rxrpc_client_traces;
 rxrpc_congest_changes;
@@ -1747,6 +1758,47 @@ TRACE_EVENT(rxrpc_txbuf,
 		      __entry->ref)
 	    );
 
+TRACE_EVENT(rxrpc_poke_call,
+	    TP_PROTO(struct rxrpc_call *call, bool busy,
+		     enum rxrpc_call_poke_trace what),
+
+	    TP_ARGS(call, busy, what),
+
+	    TP_STRUCT__entry(
+		    __field(unsigned int,		call_debug_id	)
+		    __field(bool,			busy		)
+		    __field(enum rxrpc_call_poke_trace,	what		)
+			     ),
+
+	    TP_fast_assign(
+		    __entry->call_debug_id = call->debug_id;
+		    __entry->busy = busy;
+		    __entry->what = what;
+			   ),
+
+	    TP_printk("c=%08x %s%s",
+		      __entry->call_debug_id,
+		      __print_symbolic(__entry->what, rxrpc_call_poke_traces),
+		      __entry->busy ? "!" : "")
+	    );
+
+TRACE_EVENT(rxrpc_call_poked,
+	    TP_PROTO(struct rxrpc_call *call),
+
+	    TP_ARGS(call),
+
+	    TP_STRUCT__entry(
+		    __field(unsigned int,		call_debug_id	)
+			     ),
+
+	    TP_fast_assign(
+		    __entry->call_debug_id = call->debug_id;
+			   ),
+
+	    TP_printk("c=%08x",
+		      __entry->call_debug_id)
+	    );
+
 #undef EM
 #undef E_
 #endif /* _TRACE_RXRPC_H */
diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
index 654e9dab107c..a80655fa9dfb 100644
--- a/net/rxrpc/ar-internal.h
+++ b/net/rxrpc/ar-internal.h
@@ -292,6 +292,7 @@ struct rxrpc_local {
 	struct sk_buff_head	reject_queue;	/* packets awaiting rejection */
 	struct sk_buff_head	event_queue;	/* endpoint event packets awaiting processing */
 	struct sk_buff_head	rx_queue;	/* Received packets */
+	struct list_head	call_attend_q;	/* Calls requiring immediate attention */
 	struct rb_root		client_bundles;	/* Client connection bundles by socket params */
 	spinlock_t		client_bundles_lock; /* Lock for client_bundles */
 	spinlock_t		lock;		/* access lock */
@@ -616,6 +617,7 @@ struct rxrpc_call {
 	struct list_head	recvmsg_link;	/* Link in rx->recvmsg_q */
 	struct list_head	sock_link;	/* Link in rx->sock_calls */
 	struct rb_node		sock_node;	/* Node in rx->calls */
+	struct list_head	attend_link;	/* Link in local->call_attend_q */
 	struct rxrpc_txbuf	*tx_pending;	/* Tx buffer being filled */
 	wait_queue_head_t	waitq;		/* Wait queue for channel or Tx */
 	s64			tx_total_len;	/* Total length left to be transmitted (or -1) */
@@ -843,6 +845,7 @@ extern const char *const rxrpc_call_states[];
 extern const char *const rxrpc_call_completions[];
 extern struct kmem_cache *rxrpc_call_jar;
 
+void rxrpc_poke_call(struct rxrpc_call *call, enum rxrpc_call_poke_trace what);
 struct rxrpc_call *rxrpc_find_call_by_user_ID(struct rxrpc_sock *, unsigned long);
 struct rxrpc_call *rxrpc_alloc_call(struct rxrpc_sock *, gfp_t, unsigned int);
 struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *,
@@ -951,7 +954,7 @@ void rxrpc_unpublish_service_conn(struct rxrpc_connection *);
 /*
  * input.c
  */
-void rxrpc_input_call_packet(struct rxrpc_call *, struct sk_buff *);
+void rxrpc_input_call_event(struct rxrpc_call *, struct sk_buff *);
 void rxrpc_input_implicit_end_call(struct rxrpc_sock *, struct rxrpc_connection *,
 				   struct rxrpc_call *);
 
diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
index f6d1b3a6f8c6..997641e3d1c8 100644
--- a/net/rxrpc/call_object.c
+++ b/net/rxrpc/call_object.c
@@ -45,6 +45,29 @@ static struct semaphore rxrpc_call_limiter =
 static struct semaphore rxrpc_kernel_call_limiter =
 	__SEMAPHORE_INITIALIZER(rxrpc_kernel_call_limiter, 1000);
 
+void rxrpc_poke_call(struct rxrpc_call *call, enum rxrpc_call_poke_trace what)
+{
+	struct rxrpc_local *local;
+	struct rxrpc_peer *peer = call->peer;
+	bool busy;
+
+	if (WARN_ON_ONCE(!peer))
+		return;
+	local = peer->local;
+
+	if (call->state < RXRPC_CALL_COMPLETE) {
+		spin_lock_bh(&local->lock);
+		busy = !list_empty(&call->attend_link);
+		trace_rxrpc_poke_call(call, busy, what);
+		if (!busy) {
+			rxrpc_get_call(call, rxrpc_call_get_poke);
+			list_add_tail(&call->attend_link, &local->call_attend_q);
+		}
+		spin_unlock_bh(&local->lock);
+		rxrpc_wake_up_io_thread(local);
+	}
+}
+
 static void rxrpc_call_timer_expired(struct timer_list *t)
 {
 	struct rxrpc_call *call = from_timer(call, t, timer);
@@ -137,6 +160,7 @@ struct rxrpc_call *rxrpc_alloc_call(struct rxrpc_sock *rx, gfp_t gfp,
 	INIT_LIST_HEAD(&call->accept_link);
 	INIT_LIST_HEAD(&call->recvmsg_link);
 	INIT_LIST_HEAD(&call->sock_link);
+	INIT_LIST_HEAD(&call->attend_link);
 	INIT_LIST_HEAD(&call->tx_buffer);
 	skb_queue_head_init(&call->recvmsg_queue);
 	skb_queue_head_init(&call->rx_oos_queue);
diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
index 13c52145a926..036f02371051 100644
--- a/net/rxrpc/input.c
+++ b/net/rxrpc/input.c
@@ -1017,8 +1017,7 @@ static void rxrpc_input_abort(struct rxrpc_call *call, struct sk_buff *skb)
 /*
  * Process an incoming call packet.
  */
-void rxrpc_input_call_packet(struct rxrpc_call *call,
-			     struct sk_buff *skb)
+void rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb)
 {
 	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
 	unsigned long timo;
diff --git a/net/rxrpc/io_thread.c b/net/rxrpc/io_thread.c
index 416c6101cf78..cc249bc6b8cd 100644
--- a/net/rxrpc/io_thread.c
+++ b/net/rxrpc/io_thread.c
@@ -366,7 +366,7 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff *skb)
 	/* Process a call packet; this either discards or passes on the ref
 	 * elsewhere.
 	 */
-	rxrpc_input_call_packet(call, skb);
+	rxrpc_input_call_event(call, skb);
 	goto out;
 
 discard:
@@ -413,6 +413,7 @@ int rxrpc_io_thread(void *data)
 {
 	struct sk_buff_head rx_queue;
 	struct rxrpc_local *local = data;
+	struct rxrpc_call *call;
 	struct sk_buff *skb;
 
 	skb_queue_head_init(&rx_queue);
@@ -422,6 +423,20 @@ int rxrpc_io_thread(void *data)
 	for (;;) {
 		rxrpc_inc_stat(local->rxnet, stat_io_loop);
 
+		/* Deal with calls that want immediate attention. */
+		if ((call = list_first_entry_or_null(&local->call_attend_q,
+						     struct rxrpc_call,
+						     attend_link))) {
+			spin_lock_bh(&local->lock);
+			list_del_init(&call->attend_link);
+			spin_unlock_bh(&local->lock);
+
+			trace_rxrpc_call_poked(call);
+			rxrpc_input_call_event(call, NULL);
+			rxrpc_put_call(call, rxrpc_call_put_poke);
+			continue;
+		}
+
 		/* Process received packets and errors. */
 		if ((skb = __skb_dequeue(&rx_queue))) {
 			switch (skb->mark) {
@@ -450,7 +465,8 @@ int rxrpc_io_thread(void *data)
 		}
 
 		set_current_state(TASK_INTERRUPTIBLE);
-		if (!skb_queue_empty(&local->rx_queue)) {
+		if (!skb_queue_empty(&local->rx_queue) ||
+		    !list_empty(&local->call_attend_q)) {
 			__set_current_state(TASK_RUNNING);
 			continue;
 		}
diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c
index 6b4d77219f36..03f491cc23ef 100644
--- a/net/rxrpc/local_object.c
+++ b/net/rxrpc/local_object.c
@@ -104,6 +104,7 @@ static struct rxrpc_local *rxrpc_alloc_local(struct rxrpc_net *rxnet,
 		skb_queue_head_init(&local->reject_queue);
 		skb_queue_head_init(&local->event_queue);
 		skb_queue_head_init(&local->rx_queue);
+		INIT_LIST_HEAD(&local->call_attend_q);
 		local->client_bundles = RB_ROOT;
 		spin_lock_init(&local->client_bundles_lock);
 		spin_lock_init(&local->lock);
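
A hedged sketch of the poke idea: a call goes onto the endpoint's attention
list only if it is not already there (so repeated pokes coalesce), the list
holds a reference on the call, and the I/O thread is woken to service the list.
The sketch below uses invented names and a plain linked list rather than the
kernel's list_head and refcount machinery:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct call {
	struct call *attend_next;
	bool	     queued;		/* already on the attention list? */
	int	     refs;
	int	     debug_id;
};

struct endpoint {
	pthread_mutex_t lock;
	pthread_cond_t	wake;
	struct call    *attend_head;
};

/* Ask the I/O thread to look at a call; duplicate pokes coalesce. */
static void poke_call(struct endpoint *ep, struct call *call)
{
	bool busy;

	pthread_mutex_lock(&ep->lock);
	busy = call->queued;
	if (!busy) {
		call->refs++;			/* ref held by the list */
		call->queued = true;
		call->attend_next = ep->attend_head;
		ep->attend_head = call;
	}
	pthread_mutex_unlock(&ep->lock);
	if (!busy)
		pthread_cond_signal(&ep->wake);
}

/* I/O-thread side: drain the attention list, dropping the list's refs. */
static void service_attention(struct endpoint *ep)
{
	pthread_mutex_lock(&ep->lock);
	while (ep->attend_head) {
		struct call *call = ep->attend_head;

		ep->attend_head = call->attend_next;
		call->queued = false;
		pthread_mutex_unlock(&ep->lock);

		printf("attending to call %d\n", call->debug_id);
		call->refs--;			/* drop the list's reference */

		pthread_mutex_lock(&ep->lock);
	}
	pthread_mutex_unlock(&ep->lock);
}

int main(void)
{
	struct endpoint ep = { PTHREAD_MUTEX_INITIALIZER,
			       PTHREAD_COND_INITIALIZER, NULL };
	struct call c = { NULL, false, 1, 42 };

	poke_call(&ep, &c);
	poke_call(&ep, &c);		/* coalesces with the first poke */
	service_attention(&ep);
	return 0;
}
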
From patchwork Wed Nov 30 16:57:45 2022
X-Patchwork-Id: 27900
Subject: [PATCH net-next 24/35] rxrpc: Copy client call parameters into rxrpc_call earlier
From: David Howells
To: netdev@vger.kernel.org
Cc: Marc Dionne, linux-afs@lists.infradead.org, dhowells@redhat.com,
    linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org
Date: Wed, 30 Nov 2022 16:57:45 +0000
Message-ID: <166982746544.621383.4933729446413150137.stgit@warthog.procyon.org.uk>
In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk>

Copy the client call parameters into rxrpc_call earlier so that it can be
used to convey them to the connection code - which can then be offloaded to
the I/O thread.

Signed-off-by: David Howells
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
---
 include/trace/events/rxrpc.h |    3 +++
 net/rxrpc/ar-internal.h      |    7 +++++-
 net/rxrpc/call_accept.c      |    2 ++
 net/rxrpc/call_object.c      |   50 ++++++++++++++++++++++++++----------------
 net/rxrpc/conn_client.c      |    2 +-
 net/rxrpc/io_thread.c        |    4 ++-
 net/rxrpc/output.c           |    2 +-
 net/rxrpc/proc.c             |   25 ++++++---------------
 net/rxrpc/recvmsg.c          |    8 +++----
 net/rxrpc/security.c         |   30 +++++++++++++++++++++++++
 net/rxrpc/txbuf.c            |    2 +-
 11 files changed, 87 insertions(+), 48 deletions(-)

diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
index 0b12d96c7921..8bd48358f757 100644
--- a/include/trace/events/rxrpc.h
+++ b/include/trace/events/rxrpc.h
@@ -52,6 +52,7 @@
 
 #define rxrpc_local_traces \
 	EM(rxrpc_local_free,			"FREE        ") \
+	EM(rxrpc_local_get_call,		"GET call    ") \
 	EM(rxrpc_local_get_client_conn,		"GET conn-cln") \
 	EM(rxrpc_local_get_for_use,		"GET for-use ") \
 	EM(rxrpc_local_get_peer,		"GET peer    ") \
@@ -61,6 +62,7 @@
 	EM(rxrpc_local_processing,		"PROCESSING  ") \
 	EM(rxrpc_local_put_already_queued,	"PUT alreadyq") \
 	EM(rxrpc_local_put_bind,		"PUT bind    ") \
+	EM(rxrpc_local_put_call,		"PUT call    ") \
 	EM(rxrpc_local_put_for_use,		"PUT for-use ") \
 	EM(rxrpc_local_put_kill_conn,		"PUT conn-kil") \
 	EM(rxrpc_local_put_peer,		"PUT peer    ") \
@@ -166,6 +168,7 @@
 	EM(rxrpc_call_new_client,		"NEW client  ") \
 	EM(rxrpc_call_new_prealloc_service,	"NEW prealloc") \
 	EM(rxrpc_call_put_discard_prealloc,	"PUT disc-pre") \
+	EM(rxrpc_call_put_discard_error,	"PUT disc-err") \
 	EM(rxrpc_call_put_input,		"PUT input   ") \
 	EM(rxrpc_call_put_kernel,		"PUT kernel  ") \
 	EM(rxrpc_call_put_poke,			"PUT poke    ") \
diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
index a80655fa9dfb..3bd6a5eb2fb7 100644
--- a/net/rxrpc/ar-internal.h
+++ b/net/rxrpc/ar-internal.h
@@ -530,6 +530,7 @@ enum rxrpc_call_flag {
 	RXRPC_CALL_UPGRADE,		/* Service upgrade was requested for the call */
 	RXRPC_CALL_DELAY_ACK_PENDING,	/* DELAY ACK generation is pending */
 	RXRPC_CALL_IDLE_ACK_PENDING,	/* IDLE ACK generation is pending */
+	RXRPC_CALL_EXCLUSIVE,		/* The call uses a once-only connection */
 };
 
 /*
@@ -592,10 +593,13 @@ struct rxrpc_call {
 	struct rcu_head		rcu;
 	struct rxrpc_connection	*conn;		/* connection carrying call */
 	struct rxrpc_peer	*peer;		/* Peer record for remote address */
+	struct rxrpc_local	*local;		/* Representation of local endpoint */
 	struct rxrpc_sock __rcu	*socket;	/* socket responsible */
 	struct rxrpc_net	*rxnet;		/* Network namespace to which call belongs */
+	struct key		*key;		/* Security details */
 	const struct rxrpc_security *security;	/* applied security module */
 	struct mutex		user_mutex;	/* User access mutex */
+	struct sockaddr_rxrpc	dest_srx;	/* Destination address */
 	unsigned long		delay_ack_at;	/* When DELAY ACK needs to happen */
 	unsigned long		ack_lost_at;	/* When ACK is figured as lost */
 	unsigned long		resend_at;	/* When next resend needs to happen */
@@ -631,11 +635,11 @@ struct rxrpc_call {
 	enum rxrpc_call_state	state;		/* current state of call */
 	enum rxrpc_call_completion completion;	/* Call completion condition */
 	refcount_t		ref;
-	u16			service_id;	/* service ID */
 	u8			security_ix;	/* Security type */
 	enum rxrpc_interruptibility interruptibility; /* At what point call may be interrupted */
 	u32			call_id;	/* call ID on connection */
 	u32			cid;		/* connection ID plus channel index */
+	u32			security_level;	/* Security level selected */
 	int			debug_id;	/* debug ID for printks */
 	unsigned short		rx_pkt_offset;	/* Current recvmsg packet offset */
 	unsigned short		rx_pkt_len;	/* Current recvmsg packet len */
@@ -1147,6 +1151,7 @@ extern const struct rxrpc_security rxkad;
 int __init rxrpc_init_security(void);
 const struct rxrpc_security *rxrpc_security_lookup(u8);
 void rxrpc_exit_security(void);
+int rxrpc_init_client_call_security(struct rxrpc_call *);
 int rxrpc_init_client_conn_security(struct rxrpc_connection *);
 const struct rxrpc_security *rxrpc_get_incoming_security(struct rxrpc_sock *,
 							  struct sk_buff *);
diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c
index 8d106b626aa3..8bc327aa2beb 100644
--- a/net/rxrpc/call_accept.c
+++ b/net/rxrpc/call_accept.c
@@ -318,10 +318,12 @@ static struct rxrpc_call *rxrpc_alloc_incoming_call(struct rxrpc_sock *rx,
 			  (call_tail + 1) & (RXRPC_BACKLOG_MAX - 1));
 
 	rxrpc_see_call(call, rxrpc_call_see_accept);
+	call->local = rxrpc_get_local(conn->local, rxrpc_local_get_call);
 	call->conn = conn;
 	call->security = conn->security;
 	call->security_ix = conn->security_ix;
 	call->peer = rxrpc_get_peer(conn->peer, rxrpc_peer_get_accept);
+	call->dest_srx = peer->srx;
 	call->cong_ssthresh = call->peer->cong_ssthresh;
 	call->tx_last_sent = ktime_get_real();
 	return call;
diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
index 997641e3d1c8..2622d06bb0d6 100644
--- a/net/rxrpc/call_object.c
+++ b/net/rxrpc/call_object.c
@@ -47,14 +47,9 @@ static struct semaphore rxrpc_kernel_call_limiter =
 
 void rxrpc_poke_call(struct rxrpc_call *call, enum rxrpc_call_poke_trace what)
 {
-	struct rxrpc_local *local;
-	struct rxrpc_peer *peer = call->peer;
+	struct rxrpc_local *local = call->local;
 	bool busy;
 
-	if (WARN_ON_ONCE(!peer))
-		return;
-	local = peer->local;
-
 	if (call->state < RXRPC_CALL_COMPLETE) {
 		spin_lock_bh(&local->lock);
 		busy = !list_empty(&call->attend_link);
@@ -200,22 +195,45 @@ struct rxrpc_call *rxrpc_alloc_call(struct rxrpc_sock *rx, gfp_t gfp,
  */
 static struct rxrpc_call *rxrpc_alloc_client_call(struct rxrpc_sock *rx,
 						  struct sockaddr_rxrpc *srx,
+						  struct rxrpc_conn_parameters *cp,
+						  struct rxrpc_call_params *p,
 						  gfp_t gfp,
 						  unsigned int debug_id)
 {
 	struct rxrpc_call *call;
 	ktime_t now;
+	int ret;
 
 	_enter("");
 
 	call = rxrpc_alloc_call(rx, gfp, debug_id);
 	if (!call)
 		return ERR_PTR(-ENOMEM);
-	call->state = RXRPC_CALL_CLIENT_AWAIT_CONN;
-	call->service_id = srx->srx_service;
 	now = ktime_get_real();
-	call->acks_latest_ts = now;
-	call->cong_tstamp = now;
+	call->acks_latest_ts	= now;
+	call->cong_tstamp	= now;
+	call->state		= RXRPC_CALL_CLIENT_AWAIT_CONN;
+	call->dest_srx		= *srx;
+	call->interruptibility	= p->interruptibility;
+	call->tx_total_len	= p->tx_total_len;
+	call->key		= key_get(cp->key);
+	call->local		= rxrpc_get_local(cp->local, rxrpc_local_get_call);
+	if (p->kernel)
+		__set_bit(RXRPC_CALL_KERNEL, &call->flags);
+	if (cp->upgrade)
+		__set_bit(RXRPC_CALL_UPGRADE, &call->flags);
+	if (cp->exclusive)
+		__set_bit(RXRPC_CALL_EXCLUSIVE, &call->flags);
+
+	ret = rxrpc_init_client_call_security(call);
+	if (ret < 0) {
+		__rxrpc_set_call_completion(call, RXRPC_CALL_LOCAL_ERROR, 0, ret);
+		rxrpc_put_call(call, rxrpc_call_put_discard_error);
+		return ERR_PTR(ret);
+	}
+
+	trace_rxrpc_call(call->debug_id, refcount_read(&call->ref),
+			 p->user_call_ID, rxrpc_call_new_client);
 
 	_leave(" = %p", call);
 	return call;
@@ -295,7 +313,7 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
 		return ERR_PTR(-ERESTARTSYS);
 	}
 
-	call = rxrpc_alloc_client_call(rx, srx, gfp, debug_id);
+	call = rxrpc_alloc_client_call(rx, srx, cp, p, gfp, debug_id);
 	if (IS_ERR(call)) {
 		release_sock(&rx->sk);
 		up(limiter);
@@ -303,13 +321,6 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
 		return call;
 	}
 
-	call->interruptibility = p->interruptibility;
-	call->tx_total_len = p->tx_total_len;
-	trace_rxrpc_call(call->debug_id, refcount_read(&call->ref),
-			 p->user_call_ID, rxrpc_call_new_client);
-	if (p->kernel)
-		__set_bit(RXRPC_CALL_KERNEL, &call->flags);
-
 	/* We need to protect a partially set up call against the user as we
 	 * will be acting outside the socket lock.
 	 */
@@ -413,7 +424,7 @@ void rxrpc_incoming_call(struct rxrpc_sock *rx,
 	rcu_assign_pointer(call->socket, rx);
 	call->call_id		= sp->hdr.callNumber;
-	call->service_id	= sp->hdr.serviceId;
+	call->dest_srx.srx_service = sp->hdr.serviceId;
 	call->cid		= sp->hdr.cid;
 	call->state		= RXRPC_CALL_SERVER_SECURING;
 	call->cong_tstamp	= skb->tstamp;
@@ -639,6 +650,7 @@ static void rxrpc_destroy_call(struct work_struct *work)
 	rxrpc_free_skb(call->acks_soft_tbl, rxrpc_skb_put_ack);
 	rxrpc_put_connection(call->conn, rxrpc_conn_put_call);
 	rxrpc_put_peer(call->peer, rxrpc_peer_put_call);
+	rxrpc_put_local(call->local, rxrpc_local_put_call);
 	call_rcu(&call->rcu, rxrpc_rcu_free_call);
 }
 
diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c
index 9485a3d18f29..ab3dd22fadc0 100644
--- a/net/rxrpc/conn_client.c
+++ b/net/rxrpc/conn_client.c
@@ -553,7 +553,7 @@ static void rxrpc_activate_one_channel(struct rxrpc_connection *conn,
 	call->call_id	= call_id;
 	call->security	= conn->security;
 	call->security_ix = conn->security_ix;
-	call->service_id = conn->service_id;
+	call->dest_srx.srx_service = conn->service_id;
 
 	trace_rxrpc_connect_call(call);
 
diff --git a/net/rxrpc/io_thread.c b/net/rxrpc/io_thread.c
index cc249bc6b8cd..2119941b6d6c 100644
--- a/net/rxrpc/io_thread.c
+++ b/net/rxrpc/io_thread.c
@@ -343,8 +343,8 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff *skb)
 	}
 
 	if (call) {
-		if (sp->hdr.serviceId != call->service_id)
-			call->service_id = sp->hdr.serviceId;
+		if (sp->hdr.serviceId != call->dest_srx.srx_service)
+			call->dest_srx.srx_service = sp->hdr.serviceId;
 		if ((int)sp->hdr.serial - (int)call->rx_serial > 0)
 			call->rx_serial = sp->hdr.serial;
 		if (!test_bit(RXRPC_CALL_RX_HEARD, &call->flags))
diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c
index 131c7a76fb06..e2ce7dadbb7a 100644
--- a/net/rxrpc/output.c
+++ b/net/rxrpc/output.c
@@ -357,7 +357,7 @@ int rxrpc_send_abort_packet(struct rxrpc_call *call)
 	pkt.whdr.userStatus	= 0;
 	pkt.whdr.securityIndex	= call->security_ix;
 	pkt.whdr._rsvd		= 0;
-	pkt.whdr.serviceId	= htons(call->service_id);
+	pkt.whdr.serviceId	= htons(call->dest_srx.srx_service);
 	pkt.abort_code		= htonl(call->abort_code);
 
 	iov[0].iov_base	= &pkt;
diff --git a/net/rxrpc/proc.c b/net/rxrpc/proc.c
index 35d5b43c677e..5af7c8ee4b1a 100644
--- a/net/rxrpc/proc.c
+++ b/net/rxrpc/proc.c
@@ -49,8 +49,6 @@ static void rxrpc_call_seq_stop(struct seq_file *seq, void *v)
 static int rxrpc_call_seq_show(struct seq_file *seq, void *v)
 {
 	struct rxrpc_local *local;
-	struct rxrpc_sock *rx;
-	struct rxrpc_peer *peer;
 	struct rxrpc_call *call;
 	struct rxrpc_net *rxnet = rxrpc_net(seq_file_net(seq));
 	unsigned long timeout = 0;
@@ -69,22 +67,13 @@ static int rxrpc_call_seq_show(struct seq_file *seq, void *v)
 
 	call = list_entry(v, struct rxrpc_call, link);
 
-	rx = rcu_dereference(call->socket);
-	if (rx) {
-		local = READ_ONCE(rx->local);
-		if (local)
-			sprintf(lbuff, "%pISpc", &local->srx.transport);
-		else
-			strcpy(lbuff, "no_local");
-	} else {
-		strcpy(lbuff, "no_socket");
-	}
-
-	peer = call->peer;
-	if (peer)
-		sprintf(rbuff, "%pISpc", &peer->srx.transport);
+	local = call->local;
+	if (local)
+		sprintf(lbuff, "%pISpc", &local->srx.transport);
 	else
-		strcpy(rbuff, "no_connection");
+		strcpy(lbuff, "no_local");
+
+	sprintf(rbuff, "%pISpc", &call->dest_srx.transport);
 
 	if (call->state != RXRPC_CALL_SERVER_PREALLOC) {
 		timeout = READ_ONCE(call->expect_rx_by);
@@ -98,7 +87,7 @@ static int rxrpc_call_seq_show(struct seq_file *seq, void *v)
 		   " %-8.8s %08x %08x %08x %02x %08x %02x %08x %06lx\n",
lbuff, rbuff, - call->service_id, + call->dest_srx.srx_service, call->cid, call->call_id, rxrpc_is_service_call(call) ? "Svc" : "Clt", diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c index bfac9e09347e..5df7f468abed 100644 --- a/net/rxrpc/recvmsg.c +++ b/net/rxrpc/recvmsg.c @@ -490,11 +490,9 @@ int rxrpc_recvmsg(struct socket *sock, struct msghdr *msg, size_t len, } if (msg->msg_name && call->peer) { - struct sockaddr_rxrpc *srx = msg->msg_name; - size_t len = sizeof(call->peer->srx); + size_t len = sizeof(call->dest_srx); - memcpy(msg->msg_name, &call->peer->srx, len); - srx->srx_service = call->service_id; + memcpy(msg->msg_name, &call->dest_srx, len); msg->msg_namelen = len; } @@ -639,7 +637,7 @@ int rxrpc_kernel_recv_data(struct socket *sock, struct rxrpc_call *call, out: rxrpc_transmit_ack_packets(call->peer->local); if (_service) - *_service = call->service_id; + *_service = call->dest_srx.srx_service; mutex_unlock(&call->user_mutex); _leave(" = %d [%zu,%d]", ret, iov_iter_count(iter), *_abort); return ret; diff --git a/net/rxrpc/security.c b/net/rxrpc/security.c index e6ddac9b3732..209f2c25a0da 100644 --- a/net/rxrpc/security.c +++ b/net/rxrpc/security.c @@ -62,6 +62,36 @@ const struct rxrpc_security *rxrpc_security_lookup(u8 security_index) return rxrpc_security_types[security_index]; } +/* + * Initialise the security on a client call. + */ +int rxrpc_init_client_call_security(struct rxrpc_call *call) +{ + const struct rxrpc_security *sec; + struct rxrpc_key_token *token; + struct key *key = call->key; + int ret; + + if (!key) + return 0; + + ret = key_validate(key); + if (ret < 0) + return ret; + + for (token = key->payload.data[0]; token; token = token->next) { + sec = rxrpc_security_lookup(token->security_index); + if (sec) + goto found; + } + return -EKEYREJECTED; + +found: + call->security = sec; + _leave(" = 0"); + return 0; +} + /* * initialise the security on a client connection */ diff --git a/net/rxrpc/txbuf.c b/net/rxrpc/txbuf.c index f93dc666a3a0..90ff00c340cd 100644 --- a/net/rxrpc/txbuf.c +++ b/net/rxrpc/txbuf.c @@ -44,7 +44,7 @@ struct rxrpc_txbuf *rxrpc_alloc_txbuf(struct rxrpc_call *call, u8 packet_type, txb->wire.userStatus = 0; txb->wire.securityIndex = call->security_ix; txb->wire._rsvd = 0; - txb->wire.serviceId = htons(call->service_id); + txb->wire.serviceId = htons(call->dest_srx.srx_service); trace_rxrpc_txbuf(txb->debug_id, txb->call_debug_id, txb->seq, 1, From patchwork Wed Nov 30 16:57:54 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 27898 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp1041256wrr; Wed, 30 Nov 2022 09:03:46 -0800 (PST) X-Google-Smtp-Source: AA0mqf5KBxVSIGmB7xzC9p2/s0dXV0blHeAmV0mWpknDNsnECZqpGqsYPxU/dCNpHe1blZ12EUfq X-Received: by 2002:a17:906:914f:b0:7bc:4d3e:66dd with SMTP id y15-20020a170906914f00b007bc4d3e66ddmr24117052ejw.624.1669827826778; Wed, 30 Nov 2022 09:03:46 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669827826; cv=none; d=google.com; s=arc-20160816; b=Stw+RwvGPq8KF8qERLkKkPCbudj0S1o1Pn7q0MuT5hvdg5RIN0DtkiUH5qQLYjzq96 nQXcVUXbSa4886Vqxh3jjyiv7szLDPCoSDipxLEU4TmUSVMbCEyQNz455Z5K4JLbKKDj b4sy2VG2YleP5YjrlSUCmc3+bUVMaCEYz/Fi4hptdNP9Cn7Ey/LnvOPR/6eOOOMBUotk dH7lzfDBLwSGD+0vQTL0EY5rJRq24P8WfBNj/e1Bkzcr7Izd0TrFP9a/dMs8HNpe/wQW NbRneSn3/uVxl9dcgMWuXP/CuB78lz98Ovlw++Ft+jJXwcSwKG0y50hIgqttKvtnCMYo QW8g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; 
Subject: [PATCH net-next 25/35] rxrpc: Move DATA transmission into call processor work item
From: David Howells
To: netdev@vger.kernel.org
Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org
Date: Wed, 30 Nov 2022 16:57:54 +0000
Message-ID: <166982747419.621383.2996927769382062225.stgit@warthog.procyon.org.uk>
In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk>
References: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk>
User-Agent: StGit/1.5
MIME-Version: 1.0

Move DATA transmission into the call processor work item. In a future patch, this will be called from the I/O thread rather than being its own work item. This will allow DATA transmission to be driven directly by incoming ACKs, pokes and timers as those are processed.

The Tx queue is also split: the queue of packets prepared by sendmsg is now placed in call->tx_sendmsg and the packet dispatcher decants the packets into call->tx_buffer as space becomes available in the transmission window. This allows sendmsg to run ahead of the available space to try to prevent an underflow in transmission.
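As a rough sketch of the two-queue scheme described above - not the kernel code itself, which appears in the diff below - the decant step can be modelled in ordinary user-space C. The toy_call type, the plain singly-linked lists and the fixed window of 4 are invented for illustration; the real code uses struct rxrpc_call, wrapping sequence numbers and a congestion-derived window.

#include <stdio.h>
#include <stdlib.h>

struct pkt {
    unsigned int seq;
    struct pkt *next;
};

struct toy_call {
    struct pkt *tx_sendmsg;       /* packets prepared by "sendmsg" */
    struct pkt **tx_sendmsg_tail;
    struct pkt *tx_buffer;        /* packets handed to the transmitter */
    struct pkt **tx_buffer_tail;
    unsigned int tx_top;          /* highest seq queued for transmission */
    unsigned int acks_hard_ack;   /* highest seq hard-ACK'd by the peer */
    unsigned int tx_winsize;      /* window available for transmission */
};

static int tx_window_has_space(const struct toy_call *call)
{
    return call->tx_top - call->acks_hard_ack < call->tx_winsize;
}

/* Decant prepared packets into the transmission buffer while there is
 * window space, mimicking the dispatcher described above.
 */
static void decant_prepared_tx(struct toy_call *call)
{
    while (call->tx_sendmsg && tx_window_has_space(call)) {
        struct pkt *p = call->tx_sendmsg;

        call->tx_sendmsg = p->next;
        if (!call->tx_sendmsg)
            call->tx_sendmsg_tail = &call->tx_sendmsg;

        p->next = NULL;
        *call->tx_buffer_tail = p;
        call->tx_buffer_tail = &p->next;
        call->tx_top = p->seq;
        printf("transmit seq %u\n", p->seq);
    }
}

int main(void)
{
    struct toy_call call = {
        .tx_sendmsg_tail = &call.tx_sendmsg,
        .tx_buffer_tail  = &call.tx_buffer,
        .tx_winsize      = 4,
    };

    /* "sendmsg" runs ahead and prepares eight packets up front. */
    for (unsigned int seq = 1; seq <= 8; seq++) {
        struct pkt *p = malloc(sizeof(*p));

        p->seq = seq;
        p->next = NULL;
        *call.tx_sendmsg_tail = p;
        call.tx_sendmsg_tail = &p->next;
    }

    decant_prepared_tx(&call);   /* transmits 1-4, window is then full */
    call.acks_hard_ack = 4;      /* peer hard-ACKs packets 1-4 */
    decant_prepared_tx(&call);   /* transmits 5-8 */
    return 0;
}

The point of the split is visible in the example: the producer can stage all eight packets immediately, while actual transmission is paced by the window and resumes as soon as acknowledgements open it up.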
Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- include/trace/events/rxrpc.h | 6 +++ net/rxrpc/ar-internal.h | 5 ++- net/rxrpc/call_event.c | 83 +++++++++++++++++++++++++++++++++++++++--- net/rxrpc/call_object.c | 6 +++ net/rxrpc/output.c | 48 ++++++++++++++++++++++++ net/rxrpc/sendmsg.c | 83 +++++++----------------------------------- net/rxrpc/txbuf.c | 10 ++++- 7 files changed, 161 insertions(+), 80 deletions(-) diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index 8bd48358f757..c3043fbea0e6 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -183,6 +183,7 @@ EM(rxrpc_call_queue_requeue, "QUE requeue ") \ EM(rxrpc_call_queue_resend, "QUE resend ") \ EM(rxrpc_call_queue_timer, "QUE timer ") \ + EM(rxrpc_call_queue_tx_data, "QUE tx-data ") \ EM(rxrpc_call_see_accept, "SEE accept ") \ EM(rxrpc_call_see_activate_client, "SEE act-clnt") \ EM(rxrpc_call_see_connect_failed, "SEE con-fail") \ @@ -738,6 +739,7 @@ TRACE_EVENT(rxrpc_txqueue, __field(rxrpc_seq_t, acks_hard_ack ) __field(rxrpc_seq_t, tx_bottom ) __field(rxrpc_seq_t, tx_top ) + __field(rxrpc_seq_t, tx_prepared ) __field(int, tx_winsize ) ), @@ -747,16 +749,18 @@ TRACE_EVENT(rxrpc_txqueue, __entry->acks_hard_ack = call->acks_hard_ack; __entry->tx_bottom = call->tx_bottom; __entry->tx_top = call->tx_top; + __entry->tx_prepared = call->tx_prepared; __entry->tx_winsize = call->tx_winsize; ), - TP_printk("c=%08x %s f=%08x h=%08x n=%u/%u/%u", + TP_printk("c=%08x %s f=%08x h=%08x n=%u/%u/%u/%u", __entry->call, __print_symbolic(__entry->why, rxrpc_txqueue_traces), __entry->tx_bottom, __entry->acks_hard_ack, __entry->tx_top - __entry->tx_bottom, __entry->tx_top - __entry->acks_hard_ack, + __entry->tx_prepared - __entry->tx_bottom, __entry->tx_winsize) ); diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 3bd6a5eb2fb7..6af7298af39b 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -646,9 +646,11 @@ struct rxrpc_call { /* Transmitted data tracking. */ spinlock_t tx_lock; /* Transmit queue lock */ + struct list_head tx_sendmsg; /* Sendmsg prepared packets */ struct list_head tx_buffer; /* Buffer of transmissible packets */ rxrpc_seq_t tx_bottom; /* First packet in buffer */ rxrpc_seq_t tx_transmitted; /* Highest packet transmitted */ + rxrpc_seq_t tx_prepared; /* Highest Tx slot prepared. */ rxrpc_seq_t tx_top; /* Highest Tx slot allocated. 
*/ u16 tx_backoff; /* Delay to insert due to Tx failure */ u8 tx_winsize; /* Maximum size of Tx window */ @@ -766,7 +768,7 @@ struct rxrpc_send_params { */ struct rxrpc_txbuf { struct rcu_head rcu; - struct list_head call_link; /* Link in call->tx_queue */ + struct list_head call_link; /* Link in call->tx_sendmsg/tx_buffer */ struct list_head tx_link; /* Link in live Enc queue or Tx queue */ struct rxrpc_call *call; /* Call to which belongs */ ktime_t last_sent; /* Time at which last transmitted */ @@ -1067,6 +1069,7 @@ int rxrpc_send_abort_packet(struct rxrpc_call *); int rxrpc_send_data_packet(struct rxrpc_call *, struct rxrpc_txbuf *); void rxrpc_reject_packets(struct rxrpc_local *); void rxrpc_send_keepalive(struct rxrpc_peer *); +void rxrpc_transmit_one(struct rxrpc_call *call, struct rxrpc_txbuf *txb); /* * peer_event.c diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c index 3925b55e2064..c9f835292f7b 100644 --- a/net/rxrpc/call_event.c +++ b/net/rxrpc/call_event.c @@ -291,6 +291,72 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j) _leave(""); } +static bool rxrpc_tx_window_has_space(struct rxrpc_call *call) +{ + unsigned int winsize = min_t(unsigned int, call->tx_winsize, + call->cong_cwnd + call->cong_extra); + rxrpc_seq_t window = call->acks_hard_ack, wtop = window + winsize; + rxrpc_seq_t tx_top = call->tx_top; + int space; + + space = wtop - tx_top; + return space > 0; +} + +/* + * Decant some if the sendmsg prepared queue into the transmission buffer. + */ +static void rxrpc_decant_prepared_tx(struct rxrpc_call *call) +{ + struct rxrpc_txbuf *txb; + + if (rxrpc_is_client_call(call) && + !test_bit(RXRPC_CALL_EXPOSED, &call->flags)) + rxrpc_expose_client_call(call); + + while ((txb = list_first_entry_or_null(&call->tx_sendmsg, + struct rxrpc_txbuf, call_link))) { + spin_lock(&call->tx_lock); + list_del(&txb->call_link); + spin_unlock(&call->tx_lock); + + call->tx_top = txb->seq; + list_add_tail(&txb->call_link, &call->tx_buffer); + + rxrpc_transmit_one(call, txb); + + // TODO: Drain the transmission buffers. Do this somewhere better + if (after(call->acks_hard_ack, call->tx_bottom + 16)) + rxrpc_shrink_call_tx_buffer(call); + + if (!rxrpc_tx_window_has_space(call)) + break; + } +} + +static void rxrpc_transmit_some_data(struct rxrpc_call *call) +{ + switch (call->state) { + case RXRPC_CALL_SERVER_ACK_REQUEST: + if (list_empty(&call->tx_sendmsg)) + return; + fallthrough; + + case RXRPC_CALL_SERVER_SEND_REPLY: + case RXRPC_CALL_SERVER_AWAIT_ACK: + case RXRPC_CALL_CLIENT_SEND_REQUEST: + case RXRPC_CALL_CLIENT_AWAIT_REPLY: + if (!rxrpc_tx_window_has_space(call)) + return; + if (list_empty(&call->tx_sendmsg)) + return; + rxrpc_decant_prepared_tx(call); + break; + default: + return; + } +} + /* * Handle retransmission and deferred ACK/abort generation. 
*/ @@ -309,19 +375,22 @@ void rxrpc_process_call(struct work_struct *work) call->debug_id, rxrpc_call_states[call->state], call->events); recheck_state: + if (call->acks_hard_ack != call->tx_bottom) + rxrpc_shrink_call_tx_buffer(call); + /* Limit the number of times we do this before returning to the manager */ - iterations++; - if (iterations > 5) - goto requeue; + if (!rxrpc_tx_window_has_space(call) || + list_empty(&call->tx_sendmsg)) { + iterations++; + if (iterations > 5) + goto requeue; + } if (test_and_clear_bit(RXRPC_CALL_EV_ABORT, &call->events)) { rxrpc_send_abort_packet(call); goto recheck_state; } - if (READ_ONCE(call->acks_hard_ack) != call->tx_bottom) - rxrpc_shrink_call_tx_buffer(call); - if (call->state == RXRPC_CALL_COMPLETE) { del_timer_sync(&call->timer); goto out; @@ -387,6 +456,8 @@ void rxrpc_process_call(struct work_struct *work) set_bit(RXRPC_CALL_EV_RESEND, &call->events); } + rxrpc_transmit_some_data(call); + /* Process events */ if (test_and_clear_bit(RXRPC_CALL_EV_EXPIRED, &call->events)) { if (test_bit(RXRPC_CALL_RX_HEARD, &call->flags) && diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c index 2622d06bb0d6..96a7edd3a842 100644 --- a/net/rxrpc/call_object.c +++ b/net/rxrpc/call_object.c @@ -156,6 +156,7 @@ struct rxrpc_call *rxrpc_alloc_call(struct rxrpc_sock *rx, gfp_t gfp, INIT_LIST_HEAD(&call->recvmsg_link); INIT_LIST_HEAD(&call->sock_link); INIT_LIST_HEAD(&call->attend_link); + INIT_LIST_HEAD(&call->tx_sendmsg); INIT_LIST_HEAD(&call->tx_buffer); skb_queue_head_init(&call->recvmsg_queue); skb_queue_head_init(&call->rx_oos_queue); @@ -641,6 +642,11 @@ static void rxrpc_destroy_call(struct work_struct *work) del_timer_sync(&call->timer); rxrpc_cleanup_ring(call); + while ((txb = list_first_entry_or_null(&call->tx_sendmsg, + struct rxrpc_txbuf, call_link))) { + list_del(&txb->call_link); + rxrpc_put_txbuf(txb, rxrpc_txbuf_put_cleaned); + } while ((txb = list_first_entry_or_null(&call->tx_buffer, struct rxrpc_txbuf, call_link))) { list_del(&txb->call_link); diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index e2ce7dadbb7a..c8147e50060b 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -465,6 +465,14 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) trace_rxrpc_tx_data(call, txb->seq, serial, txb->wire.flags, test_bit(RXRPC_TXBUF_RESENT, &txb->flags), false); + + /* Track what we've attempted to transmit at least once so that the + * retransmission algorithm doesn't try to resend what we haven't sent + * yet. However, this can race as we can receive an ACK before we get + * to this point. But, OTOH, if we won't get an ACK mentioning this + * packet unless the far side received it (though it could have + * discarded it anyway and NAK'd it). + */ cmpxchg(&call->tx_transmitted, txb->seq - 1, txb->seq); /* send the packet with the don't fragment bit set if we currently @@ -712,3 +720,43 @@ void rxrpc_send_keepalive(struct rxrpc_peer *peer) peer->last_tx_at = ktime_get_seconds(); _leave(""); } + +/* + * Schedule an instant Tx resend. + */ +static inline void rxrpc_instant_resend(struct rxrpc_call *call, + struct rxrpc_txbuf *txb) +{ + if (call->state < RXRPC_CALL_COMPLETE) + kdebug("resend"); +} + +/* + * Transmit one packet. 
+ */ +void rxrpc_transmit_one(struct rxrpc_call *call, struct rxrpc_txbuf *txb) +{ + int ret; + + ret = rxrpc_send_data_packet(call, txb); + if (ret < 0) { + switch (ret) { + case -ENETUNREACH: + case -EHOSTUNREACH: + case -ECONNREFUSED: + rxrpc_set_call_completion(call, RXRPC_CALL_LOCAL_ERROR, + 0, ret); + break; + default: + _debug("need instant resend %d", ret); + rxrpc_instant_resend(call, txb); + } + } else { + unsigned long now = jiffies; + unsigned long resend_at = now + call->peer->rto_j; + + WRITE_ONCE(call->resend_at, resend_at); + rxrpc_reduce_call_timer(call, resend_at, now, + rxrpc_timer_set_for_send); + } +} diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c index 76b1e2e89c1e..11af37275d5b 100644 --- a/net/rxrpc/sendmsg.c +++ b/net/rxrpc/sendmsg.c @@ -22,30 +22,9 @@ */ static bool rxrpc_check_tx_space(struct rxrpc_call *call, rxrpc_seq_t *_tx_win) { - unsigned int win_size; - rxrpc_seq_t tx_win = smp_load_acquire(&call->acks_hard_ack); - - /* If we haven't transmitted anything for >1RTT, we should reset the - * congestion management state. - */ - if (ktime_before(ktime_add_us(call->tx_last_sent, - call->peer->srtt_us >> 3), - ktime_get_real())) { - if (RXRPC_TX_SMSS > 2190) - win_size = 2; - else if (RXRPC_TX_SMSS > 1095) - win_size = 3; - else - win_size = 4; - win_size += call->cong_extra; - } else { - win_size = min_t(unsigned int, call->tx_winsize, - call->cong_cwnd + call->cong_extra); - } - if (_tx_win) - *_tx_win = tx_win; - return call->tx_top - tx_win < win_size; + *_tx_win = call->tx_bottom; + return call->tx_prepared - call->tx_bottom < 256; } /* @@ -66,11 +45,6 @@ static int rxrpc_wait_for_tx_window_intr(struct rxrpc_sock *rx, if (signal_pending(current)) return sock_intr_errno(*timeo); - if (READ_ONCE(call->acks_hard_ack) != call->tx_bottom) { - rxrpc_shrink_call_tx_buffer(call); - continue; - } - trace_rxrpc_txqueue(call, rxrpc_txqueue_wait); *timeo = schedule_timeout(*timeo); } @@ -107,11 +81,6 @@ static int rxrpc_wait_for_tx_window_waitall(struct rxrpc_sock *rx, tx_win == tx_start && signal_pending(current)) return -EINTR; - if (READ_ONCE(call->acks_hard_ack) != call->tx_bottom) { - rxrpc_shrink_call_tx_buffer(call); - continue; - } - if (tx_win != tx_start) { timeout = rtt; tx_start = tx_win; @@ -137,11 +106,6 @@ static int rxrpc_wait_for_tx_window_nonintr(struct rxrpc_sock *rx, if (call->state >= RXRPC_CALL_COMPLETE) return call->error; - if (READ_ONCE(call->acks_hard_ack) != call->tx_bottom) { - rxrpc_shrink_call_tx_buffer(call); - continue; - } - trace_rxrpc_txqueue(call, rxrpc_txqueue_wait); *timeo = schedule_timeout(*timeo); } @@ -207,29 +171,27 @@ static void rxrpc_queue_packet(struct rxrpc_sock *rx, struct rxrpc_call *call, unsigned long now; rxrpc_seq_t seq = txb->seq; bool last = test_bit(RXRPC_TXBUF_LAST, &txb->flags); - int ret; rxrpc_inc_stat(call->rxnet, stat_tx_data); - ASSERTCMP(seq, ==, call->tx_top + 1); + ASSERTCMP(txb->seq, ==, call->tx_prepared + 1); /* We have to set the timestamp before queueing as the retransmit * algorithm can see the packet as soon as we queue it. 
*/ txb->last_sent = ktime_get_real(); - /* Add the packet to the call's output buffer */ - rxrpc_get_txbuf(txb, rxrpc_txbuf_get_buffer); - spin_lock(&call->tx_lock); - list_add_tail(&txb->call_link, &call->tx_buffer); - call->tx_top = seq; - spin_unlock(&call->tx_lock); - if (last) trace_rxrpc_txqueue(call, rxrpc_txqueue_queue_last); else trace_rxrpc_txqueue(call, rxrpc_txqueue_queue); + /* Add the packet to the call's output buffer */ + spin_lock(&call->tx_lock); + list_add_tail(&txb->call_link, &call->tx_sendmsg); + call->tx_prepared = seq; + spin_unlock(&call->tx_lock); + if (last || call->state == RXRPC_CALL_SERVER_ACK_REQUEST) { _debug("________awaiting reply/ACK__________"); write_lock_bh(&call->state_lock); @@ -258,30 +220,11 @@ static void rxrpc_queue_packet(struct rxrpc_sock *rx, struct rxrpc_call *call, write_unlock_bh(&call->state_lock); } - if (seq == 1 && rxrpc_is_client_call(call)) - rxrpc_expose_client_call(call); - - ret = rxrpc_send_data_packet(call, txb); - if (ret < 0) { - switch (ret) { - case -ENETUNREACH: - case -EHOSTUNREACH: - case -ECONNREFUSED: - rxrpc_set_call_completion(call, RXRPC_CALL_LOCAL_ERROR, - 0, ret); - goto out; - } - } else { - unsigned long now = jiffies; - unsigned long resend_at = now + call->peer->rto_j; - WRITE_ONCE(call->resend_at, resend_at); - rxrpc_reduce_call_timer(call, resend_at, now, - rxrpc_timer_set_for_send); - } - -out: - rxrpc_put_txbuf(txb, rxrpc_txbuf_put_trans); + /* Stick the packet on the crypto queue or the transmission queue as + * appropriate. + */ + rxrpc_queue_call(call, rxrpc_call_queue_tx_data); } /* diff --git a/net/rxrpc/txbuf.c b/net/rxrpc/txbuf.c index 90ff00c340cd..a5054389dfbb 100644 --- a/net/rxrpc/txbuf.c +++ b/net/rxrpc/txbuf.c @@ -34,7 +34,7 @@ struct rxrpc_txbuf *rxrpc_alloc_txbuf(struct rxrpc_call *call, u8 packet_type, txb->offset = 0; txb->flags = 0; txb->ack_why = 0; - txb->seq = call->tx_top + 1; + txb->seq = call->tx_prepared + 1; txb->wire.epoch = htonl(call->conn->proto.epoch); txb->wire.cid = htonl(call->cid); txb->wire.callNumber = htonl(call->call_id); @@ -107,6 +107,7 @@ void rxrpc_shrink_call_tx_buffer(struct rxrpc_call *call) { struct rxrpc_txbuf *txb; rxrpc_seq_t hard_ack = smp_load_acquire(&call->acks_hard_ack); + bool wake = false; _enter("%x/%x/%x", call->tx_bottom, call->acks_hard_ack, call->tx_top); @@ -123,7 +124,7 @@ void rxrpc_shrink_call_tx_buffer(struct rxrpc_call *call) if (txb->seq != call->tx_bottom + 1) rxrpc_see_txbuf(txb, rxrpc_txbuf_see_out_of_step); ASSERTCMP(txb->seq, ==, call->tx_bottom + 1); - call->tx_bottom++; + smp_store_release(&call->tx_bottom, call->tx_bottom + 1); list_del_rcu(&txb->call_link); trace_rxrpc_txqueue(call, rxrpc_txqueue_dequeue); @@ -131,7 +132,12 @@ void rxrpc_shrink_call_tx_buffer(struct rxrpc_call *call) spin_unlock(&call->tx_lock); rxrpc_put_txbuf(txb, rxrpc_txbuf_put_rotated); + if (after(call->acks_hard_ack, call->tx_bottom + 128)) + wake = true; } spin_unlock(&call->tx_lock); + + if (wake) + wake_up(&call->waitq); } From patchwork Wed Nov 30 16:58:02 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 27909 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp1042768wrr; Wed, 30 Nov 2022 09:05:57 -0800 (PST) X-Google-Smtp-Source: AA0mqf5dHWrS3o0HZR0J+ivl9eB1wRhgebHA8TkgS8p3AMLpYBxiUD6i6Cw6WTdKge1/9gMNbaXB X-Received: by 2002:a05:6402:370d:b0:462:1a67:75ef with SMTP id 
Subject: [PATCH net-next 26/35] rxrpc: Remove RCU from peer->error_targets list
From: David Howells
To: netdev@vger.kernel.org
Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org
Date: Wed, 30 Nov 2022 16:58:02 +0000
Message-ID: <166982748290.621383.837780227002133853.stgit@warthog.procyon.org.uk>
In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk>
References: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk>
User-Agent: StGit/1.5
MIME-Version: 1.0

Remove the RCU requirements from the peer's list of error targets so that the error distributor can call sleeping functions.
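The locking shape this moves to can be sketched in plain user-space C. The peer/target types and the pthread mutex below are stand-ins for the kernel's peer lock and hlist, not the actual API, and the real loop in the diff below additionally re-takes the peer lock for each entry; the sketch only shows the core idea of detaching the list under the lock and doing the per-entry work with the lock dropped.

#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct target {
    int id;
    struct target *next;
};

struct peer {
    pthread_mutex_t lock;
    struct target *error_targets;   /* protected by ->lock */
};

/* Stand-in for the per-call completion work; now allowed to sleep. */
static void complete_with_error(struct target *t, int error)
{
    printf("target %d completed with error %d\n", t->id, error);
}

static void distribute_error(struct peer *peer, int error)
{
    struct target *list, *t;

    /* Detach the whole list while holding the lock... */
    pthread_mutex_lock(&peer->lock);
    list = peer->error_targets;
    peer->error_targets = NULL;
    pthread_mutex_unlock(&peer->lock);

    /* ...then do the per-target work with the lock dropped, so the
     * work is free to block.
     */
    while ((t = list)) {
        list = t->next;
        complete_with_error(t, error);
        free(t);
    }
}

int main(void)
{
    struct peer peer = { .lock = PTHREAD_MUTEX_INITIALIZER };

    for (int i = 1; i <= 3; i++) {
        struct target *t = malloc(sizeof(*t));

        t->id = i;
        t->next = peer.error_targets;
        peer.error_targets = t;
    }

    distribute_error(&peer, ECONNREFUSED);
    return 0;
}

An RCU-protected walk would have kept the iteration inside a non-sleeping read-side section; taking the entries off the list under the spinlock and completing them afterwards is what allows the completion work to sleep.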
Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- net/rxrpc/call_accept.c | 6 ++++++ net/rxrpc/call_object.c | 2 +- net/rxrpc/conn_client.c | 4 ++++ net/rxrpc/conn_object.c | 6 +++--- net/rxrpc/output.c | 6 ------ net/rxrpc/peer_event.c | 15 ++++++++++++++- 6 files changed, 28 insertions(+), 11 deletions(-) diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c index 8bc327aa2beb..5f978b0b2404 100644 --- a/net/rxrpc/call_accept.c +++ b/net/rxrpc/call_accept.c @@ -433,6 +433,12 @@ struct rxrpc_call *rxrpc_new_incoming_call(struct rxrpc_local *local, */ rxrpc_put_call(call, rxrpc_call_put_discard_prealloc); + if (hlist_unhashed(&call->error_link)) { + spin_lock(&call->peer->lock); + hlist_add_head(&call->error_link, &call->peer->error_targets); + spin_unlock(&call->peer->lock); + } + _leave(" = %p{%d}", call, call->debug_id); return call; diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c index 96a7edd3a842..7570b4e67bc5 100644 --- a/net/rxrpc/call_object.c +++ b/net/rxrpc/call_object.c @@ -442,7 +442,7 @@ void rxrpc_incoming_call(struct rxrpc_sock *rx, rcu_assign_pointer(conn->channels[chan].call, call); spin_lock(&conn->peer->lock); - hlist_add_head_rcu(&call->error_link, &conn->peer->error_targets); + hlist_add_head(&call->error_link, &conn->peer->error_targets); spin_unlock(&conn->peer->lock); rxrpc_start_call_timer(call); diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c index ab3dd22fadc0..3c7b1bdec0db 100644 --- a/net/rxrpc/conn_client.c +++ b/net/rxrpc/conn_client.c @@ -786,6 +786,10 @@ void rxrpc_expose_client_call(struct rxrpc_call *call) if (chan->call_counter >= INT_MAX) set_bit(RXRPC_CONN_DONT_REUSE, &conn->flags); trace_rxrpc_client(conn, channel, rxrpc_client_exposed); + + spin_lock(&call->peer->lock); + hlist_add_head(&call->error_link, &call->peer->error_targets); + spin_unlock(&call->peer->lock); } } diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c index c2e05ea29f12..5a39255ea014 100644 --- a/net/rxrpc/conn_object.c +++ b/net/rxrpc/conn_object.c @@ -215,9 +215,9 @@ void rxrpc_disconnect_call(struct rxrpc_call *call) call->peer->cong_ssthresh = call->cong_ssthresh; if (!hlist_unhashed(&call->error_link)) { - spin_lock_bh(&call->peer->lock); - hlist_del_rcu(&call->error_link); - spin_unlock_bh(&call->peer->lock); + spin_lock(&call->peer->lock); + hlist_del_init(&call->error_link); + spin_unlock(&call->peer->lock); } if (rxrpc_is_client_call(call)) diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index c8147e50060b..71963b4523be 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -394,12 +394,6 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) _enter("%x,{%d}", txb->seq, txb->len); - if (hlist_unhashed(&call->error_link)) { - spin_lock_bh(&call->peer->lock); - hlist_add_head_rcu(&call->error_link, &call->peer->error_targets); - spin_unlock_bh(&call->peer->lock); - } - /* Each transmission of a Tx packet needs a new serial number */ serial = atomic_inc_return(&conn->serial); txb->wire.serial = htonl(serial); diff --git a/net/rxrpc/peer_event.c b/net/rxrpc/peer_event.c index 94f63fb1bd67..97d017ca3dc4 100644 --- a/net/rxrpc/peer_event.c +++ b/net/rxrpc/peer_event.c @@ -207,11 +207,24 @@ static void rxrpc_distribute_error(struct rxrpc_peer *peer, int error, enum rxrpc_call_completion compl) { struct rxrpc_call *call; + HLIST_HEAD(error_targets); + + spin_lock(&peer->lock); + hlist_move_list(&peer->error_targets, &error_targets); + + while 
(!hlist_empty(&error_targets)) { + call = hlist_entry(error_targets.first, + struct rxrpc_call, error_link); + hlist_del_init(&call->error_link); + spin_unlock(&peer->lock); - hlist_for_each_entry_rcu(call, &peer->error_targets, error_link) { rxrpc_see_call(call, rxrpc_call_see_distribute_error); rxrpc_set_call_completion(call, compl, 0, -error); + + spin_lock(&peer->lock); } + + spin_unlock(&peer->lock); } /* From patchwork Wed Nov 30 16:58:11 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 27899 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp1041384wrr; Wed, 30 Nov 2022 09:03:57 -0800 (PST) X-Google-Smtp-Source: AA0mqf5Li17jGYa9pl6ZCM/MYPQfnmwOCbN0tb+SLC+/N/7kunDG/aMMdpQBnZSIhjs4MMNxkXcO X-Received: by 2002:a63:f211:0:b0:477:def7:58a3 with SMTP id v17-20020a63f211000000b00477def758a3mr24008179pgh.423.1669827837551; Wed, 30 Nov 2022 09:03:57 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669827837; cv=none; d=google.com; s=arc-20160816; b=ZvD94+tDEU1BBYtvI2VJVfBPpFq3HP1Vkewx/GddOtgQ8kZIIPDt/sCqV2ZkemQSSH JrcKg0cKwAHBjvAVKqtj5cLYP/9Z0GYM7g5rBMS5lGfW/awDt4P0pLdGGh1xD54AOvSI YwA8T1WxHDywioKd+gZoSC4YmnSOiQxR0oAymBAp444aFOeP0muOVWANylfUIDiziGI8 zrvg2SR7YCbWLaWKfQmFF5UCnacDhczNjaXLYoW3eBfcUDGVhHCMsH5Tb9/HTVOQLmZi l4r4jII8liPZqPMtQcz0E1DO0SjV7YXmHL8OnEQNEgW6RhBd5AFQOFVQrG9mv/8qT3qF p6Bw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:cc:to:from :subject:organization:dkim-signature; bh=U8gv5FwlfBLk2h9FYCfGchxbHxe+R+fxfyI8Nu42mz8=; b=j2gTHXaIoKdCvtCv2dG+KXHrenwl3Lv9SI8g8vTbd3cOFksk6/VdKDWQjJljpd5lue TNC/b2Bjuqa4ZIA8F1oPEJi0YD4+i7pMB5eOOgJGc/VZ5ke9fql76/GHqDi50rZBGcvJ UDpWYth/GX8dltaCPOdF8vmLgTo2FRzh9OPvdmjwaCS/bKlT+4/HtbKbLT54O3a1ggnp 1qqYmq0Glx+7fmLeETM2UXqr6EP4g5tdozLcxdCiw5WUdhSd1Ucy3rNd1yYmgLhuHF9g A1+D7OPnIkL1jLTHfpIr4u8+2C7LnsnMCFlLIVssV683/UCuYjuv5ZYO+NbUhPcPXP1R Amfg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=OBnfQl7i; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: from out1.vger.email (out1.vger.email. 
3798903 Subject: [PATCH net-next 27/35] rxrpc: Simplify skbuff accounting in receive path From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Wed, 30 Nov 2022 16:58:11 +0000 Message-ID: <166982749155.621383.2075555283515135651.stgit@warthog.procyon.org.uk> In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> References: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750941394538111031?= X-GMAIL-MSGID: =?utf-8?q?1750941394538111031?= A received skbuff needs a ref when it gets put on a call data queue or conn packet queue, and rxrpc_input_packet() and co. jump through a lot of hoops to avoid double-dropping the skbuff ref so that we can avoid getting a ref when we queue the packet. Change this so that the skbuff ref is unconditionally dropped by the caller of rxrpc_input_packet(). An additional ref is then taken on the packet if it is pushed onto a queue. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- include/trace/events/rxrpc.h | 3 +- net/rxrpc/input.c | 45 +++++++++++++-------------- net/rxrpc/io_thread.c | 70 +++++++++++++++++++----------------------- 3 files changed, 56 insertions(+), 62 deletions(-) diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index c3043fbea0e6..82b1327c2ba6 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -28,6 +28,8 @@ EM(rxrpc_skb_eaten_by_unshare_nomem, "ETN unshar-nm") \ EM(rxrpc_skb_get_ack, "GET ack ") \ EM(rxrpc_skb_get_conn_work, "GET conn-work") \ + EM(rxrpc_skb_get_local_work, "GET locl-work") \ + EM(rxrpc_skb_get_reject_work, "GET rej-work ") \ EM(rxrpc_skb_get_to_recvmsg, "GET to-recv ") \ EM(rxrpc_skb_get_to_recvmsg_oos, "GET to-recv-o") \ EM(rxrpc_skb_new_encap_rcv, "NEW encap-rcv") \ @@ -39,7 +41,6 @@ EM(rxrpc_skb_put_error_report, "PUT error-rep") \ EM(rxrpc_skb_put_input, "PUT input ") \ EM(rxrpc_skb_put_jumbo_subpacket, "PUT jumbo-sub") \ - EM(rxrpc_skb_put_lose, "PUT lose ") \ EM(rxrpc_skb_put_purge, "PUT purge ") \ EM(rxrpc_skb_put_rotate, "PUT rotate ") \ EM(rxrpc_skb_put_unknown, "PUT unknown ") \ diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index 036f02371051..42addbcf59f9 100644 --- a/net/rxrpc/input.c +++ b/net/rxrpc/input.c @@ -338,7 +338,8 @@ static void rxrpc_input_queue_data(struct rxrpc_call *call, struct sk_buff *skb, /* * Process a DATA packet. 
*/ -static void rxrpc_input_data_one(struct rxrpc_call *call, struct sk_buff *skb) +static void rxrpc_input_data_one(struct rxrpc_call *call, struct sk_buff *skb, + bool *_notify) { struct rxrpc_skb_priv *sp = rxrpc_skb(skb); struct sk_buff *oos; @@ -361,7 +362,7 @@ static void rxrpc_input_data_one(struct rxrpc_call *call, struct sk_buff *skb) if (test_and_set_bit(RXRPC_CALL_RX_LAST, &call->flags) && seq + 1 != wtop) { rxrpc_proto_abort("LSN", call, seq); - goto err_free; + return; } } else { if (test_bit(RXRPC_CALL_RX_LAST, &call->flags) && @@ -369,7 +370,7 @@ static void rxrpc_input_data_one(struct rxrpc_call *call, struct sk_buff *skb) pr_warn("Packet beyond last: c=%x q=%x window=%x-%x wlimit=%x\n", call->debug_id, seq, window, wtop, wlimit); rxrpc_proto_abort("LSA", call, seq); - goto err_free; + return; } } @@ -402,9 +403,11 @@ static void rxrpc_input_data_one(struct rxrpc_call *call, struct sk_buff *skb) if (after(window, wtop)) wtop = window; + rxrpc_get_skb(skb, rxrpc_skb_get_to_recvmsg); + spin_lock(&call->recvmsg_queue.lock); rxrpc_input_queue_data(call, skb, window, wtop, rxrpc_receive_queue); - skb = NULL; + *_notify = true; while ((oos = skb_peek(&call->rx_oos_queue))) { struct rxrpc_skb_priv *osp = rxrpc_skb(oos); @@ -456,16 +459,17 @@ static void rxrpc_input_data_one(struct rxrpc_call *call, struct sk_buff *skb) struct rxrpc_skb_priv *osp = rxrpc_skb(oos); if (after(osp->hdr.seq, seq)) { + rxrpc_get_skb(skb, rxrpc_skb_get_to_recvmsg_oos); __skb_queue_before(&call->rx_oos_queue, oos, skb); goto oos_queued; } } + rxrpc_get_skb(skb, rxrpc_skb_get_to_recvmsg_oos); __skb_queue_tail(&call->rx_oos_queue, skb); oos_queued: trace_rxrpc_receive(call, last ? rxrpc_receive_oos_last : rxrpc_receive_oos, sp->hdr.serial, sp->hdr.seq); - skb = NULL; } send_ack: @@ -483,9 +487,6 @@ static void rxrpc_input_data_one(struct rxrpc_call *call, struct sk_buff *skb) else rxrpc_propose_delay_ACK(call, serial, rxrpc_propose_ack_input_data); - -err_free: - rxrpc_free_skb(skb, rxrpc_skb_put_input); } /* @@ -498,6 +499,7 @@ static bool rxrpc_input_split_jumbo(struct rxrpc_call *call, struct sk_buff *skb struct sk_buff *jskb; unsigned int offset = sizeof(struct rxrpc_wire_header); unsigned int len = skb->len - offset; + bool notify = false; while (sp->hdr.flags & RXRPC_JUMBO_PACKET) { if (len < RXRPC_JUMBO_SUBPKTLEN) @@ -517,7 +519,8 @@ static bool rxrpc_input_split_jumbo(struct rxrpc_call *call, struct sk_buff *skb jsp = rxrpc_skb(jskb); jsp->offset = offset; jsp->len = RXRPC_JUMBO_DATALEN; - rxrpc_input_data_one(call, jskb); + rxrpc_input_data_one(call, jskb, ¬ify); + rxrpc_free_skb(jskb, rxrpc_skb_put_jumbo_subpacket); sp->hdr.flags = jhdr.flags; sp->hdr._rsvd = ntohs(jhdr._rsvd); @@ -529,7 +532,11 @@ static bool rxrpc_input_split_jumbo(struct rxrpc_call *call, struct sk_buff *skb sp->offset = offset; sp->len = len; - rxrpc_input_data_one(call, skb); + rxrpc_input_data_one(call, skb, ¬ify); + if (notify) { + trace_rxrpc_notify_socket(call->debug_id, sp->hdr.serial); + rxrpc_notify_socket(call); + } return true; protocol_error: @@ -552,10 +559,8 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb) skb->len, seq0); state = READ_ONCE(call->state); - if (state >= RXRPC_CALL_COMPLETE) { - rxrpc_free_skb(skb, rxrpc_skb_put_input); + if (state >= RXRPC_CALL_COMPLETE) return; - } /* Unshare the packet so that it can be modified for in-place * decryption. 
@@ -605,7 +610,6 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb) out: trace_rxrpc_notify_socket(call->debug_id, serial); rxrpc_notify_socket(call); - rxrpc_free_skb(skb, rxrpc_skb_put_input); _leave(" [queued]"); } @@ -797,7 +801,7 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) struct rxrpc_ackpacket ack; struct rxrpc_skb_priv *sp = rxrpc_skb(skb); struct rxrpc_ackinfo info; - struct sk_buff *skb_old = NULL, *skb_put = skb; + struct sk_buff *skb_old = NULL; rxrpc_serial_t ack_serial, acked_serial; rxrpc_seq_t first_soft_ack, hard_ack, prev_pkt; int nr_acks, offset, ioffset; @@ -963,6 +967,7 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) goto out; } + rxrpc_get_skb(skb, rxrpc_skb_get_ack); spin_lock(&call->acks_ack_lock); skb_old = call->acks_soft_tbl; call->acks_soft_tbl = skb; @@ -970,7 +975,6 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) rxrpc_input_soft_acks(call, skb->data + offset, first_soft_ack, nr_acks, &summary); - skb_put = NULL; } else if (call->acks_soft_tbl) { spin_lock(&call->acks_ack_lock); skb_old = call->acks_soft_tbl; @@ -986,7 +990,6 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) rxrpc_congestion_management(call, skb, &summary, acked_serial); out: - rxrpc_free_skb(skb_put, rxrpc_skb_put_input); rxrpc_free_skb(skb_old, rxrpc_skb_put_ack); } @@ -1037,11 +1040,11 @@ void rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) switch (sp->hdr.type) { case RXRPC_PACKET_TYPE_DATA: rxrpc_input_data(call, skb); - goto no_free; + break; case RXRPC_PACKET_TYPE_ACK: rxrpc_input_ack(call, skb); - goto no_free; + break; case RXRPC_PACKET_TYPE_BUSY: /* Just ignore BUSY packets from the server; the retry and @@ -1061,10 +1064,6 @@ void rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) default: break; } - - rxrpc_free_skb(skb, rxrpc_skb_put_input); -no_free: - _leave(""); } /* diff --git a/net/rxrpc/io_thread.c b/net/rxrpc/io_thread.c index 2119941b6d6c..91b8ba5b90db 100644 --- a/net/rxrpc/io_thread.c +++ b/net/rxrpc/io_thread.c @@ -72,6 +72,7 @@ static void rxrpc_post_packet_to_conn(struct rxrpc_connection *conn, { _enter("%p,%p", conn, skb); + rxrpc_get_skb(skb, rxrpc_skb_get_conn_work); skb_queue_tail(&conn->rx_queue, skb); rxrpc_queue_conn(conn, rxrpc_conn_queue_rx_work); } @@ -86,10 +87,9 @@ static void rxrpc_post_packet_to_local(struct rxrpc_local *local, _enter("%p,%p", local, skb); if (rxrpc_get_local_maybe(local, rxrpc_local_get_queue)) { + rxrpc_get_skb(skb, rxrpc_skb_get_local_work); skb_queue_tail(&local->event_queue, skb); rxrpc_queue_local(local); - } else { - rxrpc_free_skb(skb, rxrpc_skb_put_input); } } @@ -99,10 +99,9 @@ static void rxrpc_post_packet_to_local(struct rxrpc_local *local, static void rxrpc_reject_packet(struct rxrpc_local *local, struct sk_buff *skb) { if (rxrpc_get_local_maybe(local, rxrpc_local_get_queue)) { + rxrpc_get_skb(skb, rxrpc_skb_get_reject_work); skb_queue_tail(&local->reject_queue, skb); rxrpc_queue_local(local); - } else { - rxrpc_free_skb(skb, rxrpc_skb_put_input); } } @@ -153,7 +152,7 @@ static bool rxrpc_extract_abort(struct sk_buff *skb) /* * Process packets received on the local endpoint */ -static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff *skb) +static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff **_skb) { struct rxrpc_connection *conn; struct rxrpc_channel *chan; @@ -161,6 +160,7 @@ static int 
rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff *skb) struct rxrpc_skb_priv *sp; struct rxrpc_peer *peer = NULL; struct rxrpc_sock *rx = NULL; + struct sk_buff *skb = *_skb; unsigned int channel; if (skb->tstamp == 0) @@ -181,7 +181,6 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff *skb) static int lose; if ((lose++ & 7) == 7) { trace_rxrpc_rx_lose(sp); - rxrpc_free_skb(skb, rxrpc_skb_put_lose); return 0; } } @@ -193,13 +192,13 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff *skb) switch (sp->hdr.type) { case RXRPC_PACKET_TYPE_VERSION: if (rxrpc_to_client(sp)) - goto discard; + return 0; rxrpc_post_packet_to_local(local, skb); - goto out; + return 0; case RXRPC_PACKET_TYPE_BUSY: if (rxrpc_to_server(sp)) - goto discard; + return 0; fallthrough; case RXRPC_PACKET_TYPE_ACK: case RXRPC_PACKET_TYPE_ACKALL: @@ -208,7 +207,7 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff *skb) break; case RXRPC_PACKET_TYPE_ABORT: if (!rxrpc_extract_abort(skb)) - return true; /* Just discard if malformed */ + return 0; /* Just discard if malformed */ break; case RXRPC_PACKET_TYPE_DATA: @@ -220,15 +219,16 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff *skb) * decryption. */ if (sp->hdr.securityIndex != 0) { - struct sk_buff *nskb = skb_unshare(skb, GFP_ATOMIC); - if (!nskb) { - rxrpc_eaten_skb(skb, rxrpc_skb_eaten_by_unshare_nomem); - goto out; + skb = skb_unshare(skb, GFP_ATOMIC); + if (!skb) { + rxrpc_eaten_skb(*_skb, rxrpc_skb_eaten_by_unshare_nomem); + *_skb = NULL; + return 0; } - if (nskb != skb) { - rxrpc_eaten_skb(skb, rxrpc_skb_eaten_by_unshare); - skb = nskb; + if (skb != *_skb) { + rxrpc_eaten_skb(*_skb, rxrpc_skb_eaten_by_unshare); + *_skb = skb; rxrpc_new_skb(skb, rxrpc_skb_new_unshared); sp = rxrpc_skb(skb); } @@ -237,18 +237,18 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff *skb) case RXRPC_PACKET_TYPE_CHALLENGE: if (rxrpc_to_server(sp)) - goto discard; + return 0; break; case RXRPC_PACKET_TYPE_RESPONSE: if (rxrpc_to_client(sp)) - goto discard; + return 0; break; /* Packet types 9-11 should just be ignored. */ case RXRPC_PACKET_TYPE_PARAMS: case RXRPC_PACKET_TYPE_10: case RXRPC_PACKET_TYPE_11: - goto discard; + return 0; default: goto bad_message; @@ -268,7 +268,7 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff *skb) if (sp->hdr.type == RXRPC_PACKET_TYPE_DATA && sp->hdr.seq == 1) goto unsupported_service; - goto discard; + return 0; } } @@ -294,7 +294,7 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff *skb) /* Connection-level packet */ _debug("CONN %p {%d}", conn, conn->debug_id); rxrpc_post_packet_to_conn(conn, skb); - goto out; + return 0; } if ((int)sp->hdr.serial - (int)conn->hi_serial > 0) @@ -306,19 +306,19 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff *skb) /* Ignore really old calls */ if (sp->hdr.callNumber < chan->last_call) - goto discard; + return 0; if (sp->hdr.callNumber == chan->last_call) { if (chan->call || sp->hdr.type == RXRPC_PACKET_TYPE_ABORT) - goto discard; + return 0; /* For the previous service call, if completed * successfully, we discard all further packets. */ if (rxrpc_conn_is_service(conn) && chan->last_type == RXRPC_PACKET_TYPE_ACK) - goto discard; + return 0; /* But otherwise we need to retransmit the final packet * from data cached in the connection record. 
@@ -329,7 +329,7 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff *skb) sp->hdr.serial, sp->hdr.flags); rxrpc_post_packet_to_conn(conn, skb); - goto out; + return 0; } call = rcu_dereference(chan->call); @@ -357,21 +357,14 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff *skb) sp->hdr.type != RXRPC_PACKET_TYPE_DATA) goto bad_message; if (sp->hdr.seq != 1) - goto discard; + return 0; call = rxrpc_new_incoming_call(local, rx, skb); if (!call) goto reject_packet; } - /* Process a call packet; this either discards or passes on the ref - * elsewhere. - */ + /* Process a call packet. */ rxrpc_input_call_event(call, skb); - goto out; - -discard: - rxrpc_free_skb(skb, rxrpc_skb_put_input); -out: trace_rxrpc_rx_done(0, 0); return 0; @@ -400,9 +393,7 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff *skb) post_abort: skb->mark = RXRPC_SKB_MARK_REJECT_ABORT; reject_packet: - trace_rxrpc_rx_done(skb->mark, skb->priority); rxrpc_reject_packet(local, skb); - _leave(" [badmsg]"); return 0; } @@ -441,9 +432,12 @@ int rxrpc_io_thread(void *data) if ((skb = __skb_dequeue(&rx_queue))) { switch (skb->mark) { case RXRPC_SKB_MARK_PACKET: + skb->priority = 0; rcu_read_lock(); - rxrpc_input_packet(local, skb); + rxrpc_input_packet(local, &skb); rcu_read_unlock(); + trace_rxrpc_rx_done(skb->mark, skb->priority); + rxrpc_free_skb(skb, rxrpc_skb_put_input); break; case RXRPC_SKB_MARK_ERROR: rxrpc_input_error(local, skb); From patchwork Wed Nov 30 16:58:20 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 27910 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp1042962wrr; Wed, 30 Nov 2022 09:06:13 -0800 (PST) X-Google-Smtp-Source: AA0mqf59Nr2Xa1bw5aFd38xNEz+p5d6FUk1SJYlDVNqUE8XGSpaIBmPG8hCOkb15xzS/gm3kH3AW X-Received: by 2002:aa7:da4d:0:b0:46b:4156:bf29 with SMTP id w13-20020aa7da4d000000b0046b4156bf29mr12526470eds.246.1669827972851; Wed, 30 Nov 2022 09:06:12 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669827972; cv=none; d=google.com; s=arc-20160816; b=Z17gtU30SQxszGIFR2toiL1VwwVQizDkDUDNdvCBW/c+CZ7vZre5pOgmmYJwVAisuj qHslKzoMkV4Tl5LLbvvFxIwiBZbmM5h/MwaG6KYZHvzS8/O4arQQJnP6nh7tbfWL0Te/ K2ISHp3w7apZsfdWUhjg/bObpW5fikkq0BHz5fu1Vdz7wiiyla/HRrokBHXV/t4+vSGj V/TyRhHxMWtuTi65y49xY/9G4BUqMaG3fplzUF1PjF3qfZlHb/eFbBdmEALFZL2M/hWn cBSGoQKnbs2kXnf249DikrKnh0khmTuKmS4JUQ1MnSWbsAbn8byAARsowwqLBmU3rFEv f4tQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:cc:to:from :subject:organization:dkim-signature; bh=0ItRdbm0UyoTawpAty3EqeXI5my06iwbuvMc90NvWqA=; b=zbw/vyDsn39rqurz3DHK6hTBCwdS3kHAMzRlfDiesQc7cWM1nfkIx6SbKEbuXSfQJ0 5eDwN9awqrRnYpgOeVU+d4xTNmnLepy5WTLUYsM9UBcV5kO8PX0TCUx5leqfku9nZxvY vBOo3qwP84/fbbpvuxvGk/BW91TnXpRSDO8SwEm9ZoYnvMFnGxflmf1wr2BloTlwBxCq s7L6hr30RF0v5pG6LWw2Qz4NvA1JhCHDzPo3GC9m9S8SSs47aDREwNK3oQ7ecnOnG5Gw netthvZaAvrc0dVZ9c4MbyH54HjIJH/Qd4PzCCKg3P1xEGewUvPt7EM0OEZgI0fOSRRQ TGWA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=ed3yDF+q; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: 
from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id a13-20020a170906684d00b007bf848e0a05si1256141ejs.912.2022.11.30.09.05.45; Wed, 30 Nov 2022 09:06:12 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=ed3yDF+q; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231151AbiK3RC4 (ORCPT + 99 others); Wed, 30 Nov 2022 12:02:56 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52448 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230215AbiK3RC3 (ORCPT ); Wed, 30 Nov 2022 12:02:29 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AC09191C0E for ; Wed, 30 Nov 2022 08:58:26 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1669827505; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=0ItRdbm0UyoTawpAty3EqeXI5my06iwbuvMc90NvWqA=; b=ed3yDF+qT3dx6BzXi6GoULmqFP0XCtir9DiMA6qeXCfLcEKB+jmfxZMlHkC9S5TmY17g4H LcHXPFict4LnpCzsap/TyR09Hp6WR5uw685oVeFSiZQvd/4VX17O3ZrEARuT/qsp5rCOcv wDai/qnqTSyMRrgD3dEG5XqgJUt9xa8= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-256-cnrEHAC4MuGSTptiFkx5vA-1; Wed, 30 Nov 2022 11:58:24 -0500 X-MC-Unique: cnrEHAC4MuGSTptiFkx5vA-1 Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com [10.11.54.7]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id C91732932480; Wed, 30 Nov 2022 16:58:23 +0000 (UTC) Received: from warthog.procyon.org.uk (unknown [10.33.36.36]) by smtp.corp.redhat.com (Postfix) with ESMTP id 159531415119; Wed, 30 Nov 2022 16:58:22 +0000 (UTC) Organization: Red Hat UK Ltd. Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SI4 1TE, United Kingdom. Registered in England and Wales under Company Registration No. 
3798903 Subject: [PATCH net-next 28/35] rxrpc: Reduce the use of RCU in packet input From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Wed, 30 Nov 2022 16:58:20 +0000 Message-ID: <166982750031.621383.17246231076247717914.stgit@warthog.procyon.org.uk> In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> References: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.7 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750941536546116115?= X-GMAIL-MSGID: =?utf-8?q?1750941536546116115?= Shrink the region of rxrpc_input_packet() that is covered by the RCU read lock so that it only covers the connection and call lookup. This means that the bits now outside of that can call sleepable functions such as kmalloc and sendmsg. Also take a ref on the conn or call we're going to use before we drop the RCU read lock. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- net/rxrpc/ar-internal.h | 3 +- net/rxrpc/call_accept.c | 13 ++------- net/rxrpc/input.c | 7 ++--- net/rxrpc/io_thread.c | 68 ++++++++++++++++++++++++++++++++++++----------- 4 files changed, 59 insertions(+), 32 deletions(-) diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 6af7298af39b..cfd16f1e5c83 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -961,8 +961,7 @@ void rxrpc_unpublish_service_conn(struct rxrpc_connection *); * input.c */ void rxrpc_input_call_event(struct rxrpc_call *, struct sk_buff *); -void rxrpc_input_implicit_end_call(struct rxrpc_sock *, struct rxrpc_connection *, - struct rxrpc_call *); +void rxrpc_input_implicit_end_call(struct rxrpc_connection *, struct rxrpc_call *); /* * io_thread.c diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c index 5f978b0b2404..beb8efa2e7a9 100644 --- a/net/rxrpc/call_accept.c +++ b/net/rxrpc/call_accept.c @@ -336,13 +336,13 @@ static struct rxrpc_call *rxrpc_alloc_incoming_call(struct rxrpc_sock *rx, * If this is for a kernel service, when we allocate the call, it will have * three refs on it: (1) the kernel service, (2) the user_call_ID tree, (3) the * retainer ref obtained from the backlog buffer. Prealloc calls for userspace - * services only have the ref from the backlog buffer. We want to pass this - * ref to non-BH context to dispose of. + * services only have the ref from the backlog buffer. We pass this ref to the + * caller. * * If we want to report an error, we mark the skb with the packet type and * abort code and return NULL. * - * The call is returned with the user access mutex held. + * The call is returned with the user access mutex held and a ref on it. 
*/ struct rxrpc_call *rxrpc_new_incoming_call(struct rxrpc_local *local, struct rxrpc_sock *rx, @@ -426,13 +426,6 @@ struct rxrpc_call *rxrpc_new_incoming_call(struct rxrpc_local *local, rxrpc_send_ping(call, skb); - /* We have to discard the prealloc queue's ref here and rely on a - * combination of the RCU read lock and refs held either by the socket - * (recvmsg queue, to-be-accepted queue or user ID tree) or the kernel - * service to prevent the call from being deallocated too early. - */ - rxrpc_put_call(call, rxrpc_call_put_discard_prealloc); - if (hlist_unhashed(&call->error_link)) { spin_lock(&call->peer->lock); hlist_add_head(&call->error_link, &call->peer->error_targets); diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index 42addbcf59f9..01d32f817a7a 100644 --- a/net/rxrpc/input.c +++ b/net/rxrpc/input.c @@ -1072,8 +1072,7 @@ void rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) * * TODO: If callNumber > call_id + 1, renegotiate security. */ -void rxrpc_input_implicit_end_call(struct rxrpc_sock *rx, - struct rxrpc_connection *conn, +void rxrpc_input_implicit_end_call(struct rxrpc_connection *conn, struct rxrpc_call *call) { switch (READ_ONCE(call->state)) { @@ -1091,7 +1090,7 @@ void rxrpc_input_implicit_end_call(struct rxrpc_sock *rx, break; } - spin_lock(&rx->incoming_lock); + spin_lock(&conn->bundle->channel_lock); __rxrpc_disconnect_call(conn, call); - spin_unlock(&rx->incoming_lock); + spin_unlock(&conn->bundle->channel_lock); } diff --git a/net/rxrpc/io_thread.c b/net/rxrpc/io_thread.c index 91b8ba5b90db..3b6927610677 100644 --- a/net/rxrpc/io_thread.c +++ b/net/rxrpc/io_thread.c @@ -257,6 +257,8 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff **_skb) if (sp->hdr.serviceId == 0) goto bad_message; + rcu_read_lock(); + if (rxrpc_to_server(sp)) { /* Weed out packets to services we're not offering. Packets * that would begin a call are explicitly rejected and the rest @@ -264,7 +266,9 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff **_skb) */ rx = rcu_dereference(local->service); if (!rx || (sp->hdr.serviceId != rx->srx.srx_service && - sp->hdr.serviceId != rx->second_service)) { + sp->hdr.serviceId != rx->second_service) + ) { + rcu_read_unlock(); if (sp->hdr.type == RXRPC_PACKET_TYPE_DATA && sp->hdr.seq == 1) goto unsupported_service; @@ -293,7 +297,12 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff **_skb) if (sp->hdr.callNumber == 0) { /* Connection-level packet */ _debug("CONN %p {%d}", conn, conn->debug_id); - rxrpc_post_packet_to_conn(conn, skb); + conn = rxrpc_get_connection_maybe(conn, rxrpc_conn_get_conn_input); + rcu_read_unlock(); + if (conn) { + rxrpc_post_packet_to_conn(conn, skb); + rxrpc_put_connection(conn, rxrpc_conn_put_conn_input); + } return 0; } @@ -305,20 +314,26 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff **_skb) chan = &conn->channels[channel]; /* Ignore really old calls */ - if (sp->hdr.callNumber < chan->last_call) + if (sp->hdr.callNumber < chan->last_call) { + rcu_read_unlock(); return 0; + } if (sp->hdr.callNumber == chan->last_call) { if (chan->call || - sp->hdr.type == RXRPC_PACKET_TYPE_ABORT) + sp->hdr.type == RXRPC_PACKET_TYPE_ABORT) { + rcu_read_unlock(); return 0; + } /* For the previous service call, if completed * successfully, we discard all further packets. 
*/ if (rxrpc_conn_is_service(conn) && - chan->last_type == RXRPC_PACKET_TYPE_ACK) + chan->last_type == RXRPC_PACKET_TYPE_ACK) { + rcu_read_unlock(); return 0; + } /* But otherwise we need to retransmit the final packet * from data cached in the connection record. @@ -328,20 +343,32 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff **_skb) sp->hdr.seq, sp->hdr.serial, sp->hdr.flags); - rxrpc_post_packet_to_conn(conn, skb); + conn = rxrpc_get_connection_maybe(conn, rxrpc_conn_get_call_input); + rcu_read_unlock(); + if (conn) { + rxrpc_post_packet_to_conn(conn, skb); + rxrpc_put_connection(conn, rxrpc_conn_put_call_input); + } return 0; } call = rcu_dereference(chan->call); if (sp->hdr.callNumber > chan->call_id) { - if (rxrpc_to_client(sp)) + if (rxrpc_to_client(sp)) { + rcu_read_unlock(); goto reject_packet; - if (call) - rxrpc_input_implicit_end_call(rx, conn, call); - call = NULL; + } + if (call) { + rxrpc_input_implicit_end_call(conn, call); + chan->call = NULL; + call = NULL; + } } + if (call && !rxrpc_try_get_call(call, rxrpc_call_get_input)) + call = NULL; + if (call) { if (sp->hdr.serviceId != call->dest_srx.srx_service) call->dest_srx.srx_service = sp->hdr.serviceId; @@ -352,23 +379,33 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff **_skb) } } - if (!call || refcount_read(&call->ref) == 0) { + if (!call) { if (rxrpc_to_client(sp) || - sp->hdr.type != RXRPC_PACKET_TYPE_DATA) + sp->hdr.type != RXRPC_PACKET_TYPE_DATA) { + rcu_read_unlock(); goto bad_message; - if (sp->hdr.seq != 1) + } + if (sp->hdr.seq != 1) { + rcu_read_unlock(); return 0; + } call = rxrpc_new_incoming_call(local, rx, skb); - if (!call) + if (!call) { + rcu_read_unlock(); goto reject_packet; + } } + rcu_read_unlock(); + /* Process a call packet. 
*/ rxrpc_input_call_event(call, skb); + rxrpc_put_call(call, rxrpc_call_put_input); trace_rxrpc_rx_done(0, 0); return 0; wrong_security: + rcu_read_unlock(); trace_rxrpc_abort(0, "SEC", sp->hdr.cid, sp->hdr.callNumber, sp->hdr.seq, RXKADINCONSISTENCY, EBADMSG); skb->priority = RXKADINCONSISTENCY; @@ -381,6 +418,7 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff **_skb) goto post_abort; reupgrade: + rcu_read_unlock(); trace_rxrpc_abort(0, "UPG", sp->hdr.cid, sp->hdr.callNumber, sp->hdr.seq, RX_PROTOCOL_ERROR, EBADMSG); goto protocol_error; @@ -433,9 +471,7 @@ int rxrpc_io_thread(void *data) switch (skb->mark) { case RXRPC_SKB_MARK_PACKET: skb->priority = 0; - rcu_read_lock(); rxrpc_input_packet(local, &skb); - rcu_read_unlock(); trace_rxrpc_rx_done(skb->mark, skb->priority); rxrpc_free_skb(skb, rxrpc_skb_put_input); break; From patchwork Wed Nov 30 16:58:28 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 27905 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp1042097wrr; Wed, 30 Nov 2022 09:04:56 -0800 (PST) X-Google-Smtp-Source: AA0mqf7RSZ1jGywJh4V6xxvliQZB1dfBYQJVebW/0FjDItQ/+OzQND0P3xsnSOC4OP9u32wwdBYy X-Received: by 2002:a17:907:d40e:b0:7bb:f10c:9282 with SMTP id vi14-20020a170907d40e00b007bbf10c9282mr25302059ejc.325.1669827895841; Wed, 30 Nov 2022 09:04:55 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669827895; cv=none; d=google.com; s=arc-20160816; b=nECM4xLisJtJj+HDWFqVbWRj0I1Rq57mhS+vE5OsauHlUk1qEn+FPP/0XBO+5jdEWl rvJTOU7oWJGFsRJT02BKX4I1ik/93qDxJUSpKCtReB3Qy+g3YZhpElvoaoiJ/I/fQ6ZZ JaTtUMz9fM8pODPIp4VE0sVhUriQG+CtKWpNNjHxqNf8HP71Ks2PnKBT8W6pmSj4l1tk AQ+pCl8dmVaoXyBpiopF66UAPzIGrcfP2Zi+ZKZucP7pSeQJ8PcvQD/VqkVL8FFzVFwf 1Skym4OA+Q4zrp6ixbiHyaE8D5/kfyH9iQLtBgURHayNsfBaKFSWUU3sq2mewBVLzmmf qlQw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:cc:to:from :subject:organization:dkim-signature; bh=L5K0/1KSpPMNNElCfjhINDPpkDoq54xRghHeiwJooY0=; b=iFoswZB88w6ZfYytVQ0jld0lRk1XELop7PFP84g2chmIvAkt+txY2r/8IcHO8pQgg7 X+oO2hv7zHYcfhEputrBNlbaTzti4OYWLZdF2X5MBQlU49sV+kfvyQiO+EV9cH6AXjnK SQIhdTkOMe3je3MHY/Ja/nUzNO9FpPVF+1RsHN+EWNIJUJ4WHugJGFncE5pNGklVBS4S ctFemP5Q5qgdW5ZvFzZelPz2+umTFuzdlb708ja5Haw3GvREuASB9jUdgabzewfS72np DYikr1IE6DT0UjF5A35gHiZPCFO+ClSr64u3/ctCk9agr2mE5qVoFCo5LRVVA7h+EsdF BrDQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=YGqG+9hX; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id wu14-20020a170906eece00b007701a050273si1692735ejb.942.2022.11.30.09.04.31; Wed, 30 Nov 2022 09:04:55 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=YGqG+9hX; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230521AbiK3RDf (ORCPT + 99 others); Wed, 30 Nov 2022 12:03:35 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53814 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229901AbiK3RCs (ORCPT ); Wed, 30 Nov 2022 12:02:48 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8987983273 for ; Wed, 30 Nov 2022 08:58:37 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1669827516; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=L5K0/1KSpPMNNElCfjhINDPpkDoq54xRghHeiwJooY0=; b=YGqG+9hXUCNGKidd8hOxx1QoPNdF/GqhZ9H66P80+cEGohCkiL8RwSf1JC1gh/eMKLCxWH A4wNkIOWFWfz4FI/sqycTRN6ueu9/ayqELp3UsFWn21VXTMfX4rmGhMe8ufzgTzSpHGfwC hNepbTnxiPjSJvUn+cEQonlj1tlpFzI= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-460-oTuaG725MuCM8CAuTZqj6g-1; Wed, 30 Nov 2022 11:58:32 -0500 X-MC-Unique: oTuaG725MuCM8CAuTZqj6g-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 7545B858F13; Wed, 30 Nov 2022 16:58:32 +0000 (UTC) Received: from warthog.procyon.org.uk (unknown [10.33.36.36]) by smtp.corp.redhat.com (Postfix) with ESMTP id B5EBA2024CC0; Wed, 30 Nov 2022 16:58:31 +0000 (UTC) Organization: Red Hat UK Ltd. Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SI4 1TE, United Kingdom. Registered in England and Wales under Company Registration No. 
3798903 Subject: [PATCH net-next 29/35] rxrpc: Extract the peer address from an incoming packet earlier From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Wed, 30 Nov 2022 16:58:28 +0000 Message-ID: <166982750897.621383.7304976593934044192.stgit@warthog.procyon.org.uk> In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> References: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750941455762058772?= X-GMAIL-MSGID: =?utf-8?q?1750941455762058772?= Extract the peer address from an incoming packet earlier, at the beginning of rxrpc_input_packet() and thence pass a pointer to it to various functions that use it as part of the lookup rather than doing it on several separate paths. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- net/rxrpc/ar-internal.h | 2 ++ net/rxrpc/call_accept.c | 10 ++++++---- net/rxrpc/conn_object.c | 29 ++++++++--------------------- net/rxrpc/io_thread.c | 17 +++++++++++++++-- 4 files changed, 31 insertions(+), 27 deletions(-) diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index cfd16f1e5c83..c3c915a05627 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -824,6 +824,7 @@ int rxrpc_service_prealloc(struct rxrpc_sock *, gfp_t); void rxrpc_discard_prealloc(struct rxrpc_sock *); struct rxrpc_call *rxrpc_new_incoming_call(struct rxrpc_local *, struct rxrpc_sock *, + struct sockaddr_rxrpc *, struct sk_buff *); void rxrpc_accept_incoming_calls(struct rxrpc_local *); int rxrpc_user_charge_accept(struct rxrpc_sock *, unsigned long); @@ -916,6 +917,7 @@ extern unsigned int rxrpc_closed_conn_expiry; struct rxrpc_connection *rxrpc_alloc_connection(struct rxrpc_net *, gfp_t); struct rxrpc_connection *rxrpc_find_connection_rcu(struct rxrpc_local *, + struct sockaddr_rxrpc *, struct sk_buff *, struct rxrpc_peer **); void __rxrpc_disconnect_call(struct rxrpc_connection *, struct rxrpc_call *); diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c index beb8efa2e7a9..11134b7cec17 100644 --- a/net/rxrpc/call_accept.c +++ b/net/rxrpc/call_accept.c @@ -258,6 +258,7 @@ static struct rxrpc_call *rxrpc_alloc_incoming_call(struct rxrpc_sock *rx, struct rxrpc_peer *peer, struct rxrpc_connection *conn, const struct rxrpc_security *sec, + struct sockaddr_rxrpc *peer_srx, struct sk_buff *skb) { struct rxrpc_backlog *b = rx->backlog; @@ -287,8 +288,7 @@ static struct rxrpc_call *rxrpc_alloc_incoming_call(struct rxrpc_sock *rx, peer = NULL; if (!peer) { peer = b->peer_backlog[peer_tail]; - if (rxrpc_extract_addr_from_skb(&peer->srx, skb) < 0) - return NULL; + peer->srx = *peer_srx; b->peer_backlog[peer_tail] = NULL; smp_store_release(&b->peer_backlog_tail, (peer_tail + 1) & @@ -346,6 +346,7 @@ static struct rxrpc_call *rxrpc_alloc_incoming_call(struct rxrpc_sock *rx, */ struct rxrpc_call 
*rxrpc_new_incoming_call(struct rxrpc_local *local, struct rxrpc_sock *rx, + struct sockaddr_rxrpc *peer_srx, struct sk_buff *skb) { struct rxrpc_skb_priv *sp = rxrpc_skb(skb); @@ -371,7 +372,7 @@ struct rxrpc_call *rxrpc_new_incoming_call(struct rxrpc_local *local, * we have to recheck the routing. However, we're now holding * rx->incoming_lock, so the values should remain stable. */ - conn = rxrpc_find_connection_rcu(local, skb, &peer); + conn = rxrpc_find_connection_rcu(local, peer_srx, skb, &peer); if (!conn) { sec = rxrpc_get_incoming_security(rx, skb); @@ -379,7 +380,8 @@ struct rxrpc_call *rxrpc_new_incoming_call(struct rxrpc_local *local, goto no_call; } - call = rxrpc_alloc_incoming_call(rx, local, peer, conn, sec, skb); + call = rxrpc_alloc_incoming_call(rx, local, peer, conn, sec, peer_srx, + skb); if (!call) { skb->mark = RXRPC_SKB_MARK_REJECT_BUSY; goto no_call; diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c index 5a39255ea014..98e49646ca1d 100644 --- a/net/rxrpc/conn_object.c +++ b/net/rxrpc/conn_object.c @@ -73,29 +73,17 @@ struct rxrpc_connection *rxrpc_alloc_connection(struct rxrpc_net *rxnet, * The caller must be holding the RCU read lock. */ struct rxrpc_connection *rxrpc_find_connection_rcu(struct rxrpc_local *local, + struct sockaddr_rxrpc *srx, struct sk_buff *skb, struct rxrpc_peer **_peer) { struct rxrpc_connection *conn; struct rxrpc_conn_proto k; struct rxrpc_skb_priv *sp = rxrpc_skb(skb); - struct sockaddr_rxrpc srx; struct rxrpc_peer *peer; _enter(",%x", sp->hdr.cid & RXRPC_CIDMASK); - if (rxrpc_extract_addr_from_skb(&srx, skb) < 0) - goto not_found; - - if (srx.transport.family != local->srx.transport.family && - (srx.transport.family == AF_INET && - local->srx.transport.family != AF_INET6)) { - pr_warn_ratelimited("AF_RXRPC: Protocol mismatch %u not %u\n", - srx.transport.family, - local->srx.transport.family); - goto not_found; - } - k.epoch = sp->hdr.epoch; k.cid = sp->hdr.cid & RXRPC_CIDMASK; @@ -104,7 +92,7 @@ struct rxrpc_connection *rxrpc_find_connection_rcu(struct rxrpc_local *local, * parameter set. We look up the peer first as an intermediate * step and then the connection from the peer's tree. */ - peer = rxrpc_lookup_peer_rcu(local, &srx); + peer = rxrpc_lookup_peer_rcu(local, srx); if (!peer) goto not_found; *_peer = peer; @@ -117,8 +105,7 @@ struct rxrpc_connection *rxrpc_find_connection_rcu(struct rxrpc_local *local, /* Look up client connections by connection ID alone as their * IDs are unique for this machine. 
*/ - conn = idr_find(&rxrpc_client_conn_ids, - sp->hdr.cid >> RXRPC_CIDSHIFT); + conn = idr_find(&rxrpc_client_conn_ids, sp->hdr.cid >> RXRPC_CIDSHIFT); if (!conn || refcount_read(&conn->ref) == 0) { _debug("no conn"); goto not_found; @@ -129,20 +116,20 @@ struct rxrpc_connection *rxrpc_find_connection_rcu(struct rxrpc_local *local, goto not_found; peer = conn->peer; - switch (srx.transport.family) { + switch (srx->transport.family) { case AF_INET: if (peer->srx.transport.sin.sin_port != - srx.transport.sin.sin_port || + srx->transport.sin.sin_port || peer->srx.transport.sin.sin_addr.s_addr != - srx.transport.sin.sin_addr.s_addr) + srx->transport.sin.sin_addr.s_addr) goto not_found; break; #ifdef CONFIG_AF_RXRPC_IPV6 case AF_INET6: if (peer->srx.transport.sin6.sin6_port != - srx.transport.sin6.sin6_port || + srx->transport.sin6.sin6_port || memcmp(&peer->srx.transport.sin6.sin6_addr, - &srx.transport.sin6.sin6_addr, + &srx->transport.sin6.sin6_addr, sizeof(struct in6_addr)) != 0) goto not_found; break; diff --git a/net/rxrpc/io_thread.c b/net/rxrpc/io_thread.c index 3b6927610677..bc65d83fab88 100644 --- a/net/rxrpc/io_thread.c +++ b/net/rxrpc/io_thread.c @@ -155,6 +155,7 @@ static bool rxrpc_extract_abort(struct sk_buff *skb) static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff **_skb) { struct rxrpc_connection *conn; + struct sockaddr_rxrpc peer_srx; struct rxrpc_channel *chan; struct rxrpc_call *call = NULL; struct rxrpc_skb_priv *sp; @@ -257,6 +258,18 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff **_skb) if (sp->hdr.serviceId == 0) goto bad_message; + if (WARN_ON_ONCE(rxrpc_extract_addr_from_skb(&peer_srx, skb) < 0)) + return 0; /* Unsupported address type - discard. */ + + if (peer_srx.transport.family != local->srx.transport.family && + (peer_srx.transport.family == AF_INET && + local->srx.transport.family != AF_INET6)) { + pr_warn_ratelimited("AF_RXRPC: Protocol mismatch %u not %u\n", + peer_srx.transport.family, + local->srx.transport.family); + return 0; /* Wrong address type - discard. 
*/ + } + rcu_read_lock(); if (rxrpc_to_server(sp)) { @@ -276,7 +289,7 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff **_skb) } } - conn = rxrpc_find_connection_rcu(local, skb, &peer); + conn = rxrpc_find_connection_rcu(local, &peer_srx, skb, &peer); if (conn) { if (sp->hdr.securityIndex != conn->security_ix) goto wrong_security; @@ -389,7 +402,7 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff **_skb) rcu_read_unlock(); return 0; } - call = rxrpc_new_incoming_call(local, rx, skb); + call = rxrpc_new_incoming_call(local, rx, &peer_srx, skb); if (!call) { rcu_read_unlock(); goto reject_packet; From patchwork Wed Nov 30 16:58:37 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 27906 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp1042170wrr; Wed, 30 Nov 2022 09:05:03 -0800 (PST) X-Google-Smtp-Source: AA0mqf4Y0ORN1n7jQp83Pl/NIw2euwWecjMWHD8THO/uCkLrGgRF5Lo8arS1kLL6KGfNzNVnts4i X-Received: by 2002:aa7:c6da:0:b0:469:172:1f38 with SMTP id b26-20020aa7c6da000000b0046901721f38mr54844312eds.195.1669827903457; Wed, 30 Nov 2022 09:05:03 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669827903; cv=none; d=google.com; s=arc-20160816; b=lQWt9gQKLSi+hewPxZWBj9aJupyUuM5uNkWIqWs1pwwou3leTL2x12mZO/aCWIz2DC Ij37Ynih+r54FnB4UvksOSvTPabVwULoFutfY3cU+pJH7AV13CBEpv4bPJDxhpVkZ6gx TsE1CmIHNmkf+NjLJdYFxzXZFDct+9FgGpt7V/FqwT+AxC2Rl9RFH8O5McF5GwG9Gf/9 p+L1YzRG0BzajjUxqROS88dYF8OCZWCDnH0das6KNDuBp2v0iY1TcfhSdL9B8bEXkP1j 6x26YD4uyq5YEGP+nQWqLF0lTA/+IhX4k3r8QmTl4gDHXOn4FrpfgDsG2qlQS498uhXj 6YJw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:cc:to:from :subject:organization:dkim-signature; bh=jha3RsWmlSpHzmxewi31DkMbpLfg+Nc4JE79oTAKtPk=; b=QbwvRSDLfVlGlHcPv0PCSn/cY4iz73OT2pkCrcSaN4ZCWzBklRc9He5AAB9BoIkw2Z D2wC4YmJVCe4TzJKfXNdf5ybiWaTbdIwia4gF2SnjyZhcMBDp/ky5j9db7tr8FDzVQBs RlK4d1UdTgSD20l/4YIPwOz6UEoSpESS3zFuXNiYskzDhCM6RvRPl0u13YagW5v2DB7U RDRa3q0czYx3yJhZ4Jb9suHZzgxSZE4fyJ6XjwROPNg6pc3GbhgY4knpOZB+BXvTr++e AbR3yaO+kSFqASOji0D1OsyajcrEqOgrbIfRZlX8K2LLovHcyPzCCZPQ8CeYvun9FsoA vPAg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=cj3JZIj3; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id bo13-20020a0564020b2d00b0046184b7c4besi1440624edb.462.2022.11.30.09.04.37; Wed, 30 Nov 2022 09:05:03 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=cj3JZIj3; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231148AbiK3RDj (ORCPT + 99 others); Wed, 30 Nov 2022 12:03:39 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53430 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230215AbiK3RDA (ORCPT ); Wed, 30 Nov 2022 12:03:00 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 986248C467 for ; Wed, 30 Nov 2022 08:58:46 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1669827525; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=jha3RsWmlSpHzmxewi31DkMbpLfg+Nc4JE79oTAKtPk=; b=cj3JZIj3/UAR1vmU644AUm/BXsVkF2yYal+LUyIJOPuIkNXsR3Ci7qSlMUwn4nrAVFfHeq 0FLvPXbB87jWb2cy5PjZm8OmeOJU2UNrOEUVJlUUYbAxqHbcVh4y4ByaxPdQSUcOvshYyK ZRW0CSCzF3Sow8EHLGtW/An28oq3dRg= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-582-ofV-rgLGPDe99dp06amS9w-1; Wed, 30 Nov 2022 11:58:41 -0500 X-MC-Unique: ofV-rgLGPDe99dp06amS9w-1 Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com [10.11.54.10]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 237B0101A52A; Wed, 30 Nov 2022 16:58:41 +0000 (UTC) Received: from warthog.procyon.org.uk (unknown [10.33.36.36]) by smtp.corp.redhat.com (Postfix) with ESMTP id 14643492B04; Wed, 30 Nov 2022 16:58:39 +0000 (UTC) Organization: Red Hat UK Ltd. Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SI4 1TE, United Kingdom. Registered in England and Wales under Company Registration No. 
3798903
Subject: [PATCH net-next 30/35] rxrpc: Make the I/O thread take over the call and local processor work
From: David Howells
To: netdev@vger.kernel.org
Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org
Date: Wed, 30 Nov 2022 16:58:37 +0000
Message-ID: <166982751762.621383.5848583443739953259.stgit@warthog.procyon.org.uk>
In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk>
References: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk>
User-Agent: StGit/1.5
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10
X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6
X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org
X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?=
X-GMAIL-THRID: =?utf-8?q?1750941463561518954?=
X-GMAIL-MSGID: =?utf-8?q?1750941463561518954?=

Move the functions from the call->processor and local->processor work items into the domain of the I/O thread.

The call event processor, now called from the I/O thread, then takes over the job of cranking the call state machine, processing incoming packets and transmitting DATA, ACK and ABORT packets. In a future patch, rxrpc_send_ACK() will transmit the ACK on the spot rather than queuing it for later transmission.

The call event processor becomes purely received-skb driven. It only transmits things in response to events. We use "pokes" to queue a dummy skb to make it do things like start/resume transmitting data. Timer expiry also results in pokes.

The connection event processor becomes similar, though crypto events, such as dealing with CHALLENGE and RESPONSE packets, are offloaded to a work item to avoid doing crypto in the I/O thread.

The local event processor is removed, and VERSION response packets are generated directly from the packet parser. Similarly, ABORTs generated in response to protocol errors will be transmitted immediately rather than being pushed onto a queue for later transmission.
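For illustration, here is a minimal, self-contained userspace sketch of the "poke" pattern the commit message describes. It is plain C with pthreads, not the rxrpc code itself, and every identifier in it (work_item, post, io_thread and so on) is invented for the example: a single I/O thread drains a queue and does all event processing, while other contexts only wake it, either by posting a received "packet" or by posting a dummy poke when something such as a timer expiry or newly queued sendmsg data needs the state machine to run.

/*
 * Illustrative sketch only, not the kernel implementation: a minimal
 * userspace analogue of the poke-driven I/O thread described above.
 * One thread owns all event processing; other contexts never run the
 * state machine themselves - they queue a received "packet", or a dummy
 * "poke" that just wakes the thread so it re-checks timers and Tx state.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct work_item {
	struct work_item *next;
	bool is_poke;		/* dummy item: wakes the thread, no payload */
	int packet;		/* stands in for a received packet */
};

static struct work_item *q_head, *q_tail;
static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t q_cond = PTHREAD_COND_INITIALIZER;
static bool q_stop;

/* Queue a packet (is_poke=false) or a poke (is_poke=true) for the I/O thread. */
static void post(bool is_poke, int packet)
{
	struct work_item *item = calloc(1, sizeof(*item));

	if (!item)
		return;
	item->is_poke = is_poke;
	item->packet = packet;
	pthread_mutex_lock(&q_lock);
	if (q_tail)
		q_tail->next = item;
	else
		q_head = item;
	q_tail = item;
	pthread_cond_signal(&q_cond);	/* wake the I/O thread */
	pthread_mutex_unlock(&q_lock);
}

/* The single I/O thread: all state-machine work happens here. */
static void *io_thread(void *unused)
{
	for (;;) {
		struct work_item *item;

		pthread_mutex_lock(&q_lock);
		while (!q_head && !q_stop)
			pthread_cond_wait(&q_cond, &q_lock);
		item = q_head;
		if (item) {
			q_head = item->next;
			if (!q_head)
				q_tail = NULL;
		}
		pthread_mutex_unlock(&q_lock);

		if (!item)
			return NULL;	/* stopping and the queue is drained */
		if (item->is_poke)
			printf("poke: check timers, resume transmission\n");
		else
			printf("process packet %d\n", item->packet);
		free(item);
	}
}

int main(void)
{
	pthread_t thr;

	pthread_create(&thr, NULL, io_thread, NULL);
	post(false, 1);		/* e.g. a DATA packet arrived */
	post(true, 0);		/* e.g. a timer expired or sendmsg queued data */

	pthread_mutex_lock(&q_lock);
	q_stop = true;
	pthread_cond_signal(&q_cond);
	pthread_mutex_unlock(&q_lock);
	pthread_join(thr, NULL);
	return 0;
}

The design point this mirrors is the one the patch is after: once a single thread cranks the state machine, every other context only ever enqueues work, which is what allows the separate call and local processor work items to be retired.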
Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- include/trace/events/rxrpc.h | 42 +++--- net/rxrpc/ar-internal.h | 50 +++---- net/rxrpc/call_accept.c | 126 ++++++++--------- net/rxrpc/call_event.c | 171 +++++++++-------------- net/rxrpc/call_object.c | 56 ++++--- net/rxrpc/conn_event.c | 60 ++++++++ net/rxrpc/conn_object.c | 93 +++++------- net/rxrpc/input.c | 167 ++++++---------------- net/rxrpc/io_thread.c | 319 +++++++++++++++++++----------------------- net/rxrpc/local_event.c | 43 ------ net/rxrpc/local_object.c | 69 --------- net/rxrpc/output.c | 92 +++++------- net/rxrpc/peer_event.c | 29 ++-- net/rxrpc/recvmsg.c | 9 - net/rxrpc/sendmsg.c | 10 + 15 files changed, 545 insertions(+), 791 deletions(-) diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index 82b1327c2ba6..c49b0c233594 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -26,7 +26,6 @@ #define rxrpc_skb_traces \ EM(rxrpc_skb_eaten_by_unshare, "ETN unshare ") \ EM(rxrpc_skb_eaten_by_unshare_nomem, "ETN unshar-nm") \ - EM(rxrpc_skb_get_ack, "GET ack ") \ EM(rxrpc_skb_get_conn_work, "GET conn-work") \ EM(rxrpc_skb_get_local_work, "GET locl-work") \ EM(rxrpc_skb_get_reject_work, "GET rej-work ") \ @@ -36,7 +35,6 @@ EM(rxrpc_skb_new_error_report, "NEW error-rpt") \ EM(rxrpc_skb_new_jumbo_subpacket, "NEW jumbo-sub") \ EM(rxrpc_skb_new_unshared, "NEW unshared ") \ - EM(rxrpc_skb_put_ack, "PUT ack ") \ EM(rxrpc_skb_put_conn_work, "PUT conn-work") \ EM(rxrpc_skb_put_error_report, "PUT error-rep") \ EM(rxrpc_skb_put_input, "PUT input ") \ @@ -45,7 +43,6 @@ EM(rxrpc_skb_put_rotate, "PUT rotate ") \ EM(rxrpc_skb_put_unknown, "PUT unknown ") \ EM(rxrpc_skb_see_conn_work, "SEE conn-work") \ - EM(rxrpc_skb_see_local_work, "SEE locl-work") \ EM(rxrpc_skb_see_recvmsg, "SEE recvmsg ") \ EM(rxrpc_skb_see_reject, "SEE reject ") \ EM(rxrpc_skb_see_rotate, "SEE rotate ") \ @@ -58,10 +55,7 @@ EM(rxrpc_local_get_for_use, "GET for-use ") \ EM(rxrpc_local_get_peer, "GET peer ") \ EM(rxrpc_local_get_prealloc_conn, "GET conn-pre") \ - EM(rxrpc_local_get_queue, "GET queue ") \ EM(rxrpc_local_new, "NEW ") \ - EM(rxrpc_local_processing, "PROCESSING ") \ - EM(rxrpc_local_put_already_queued, "PUT alreadyq") \ EM(rxrpc_local_put_bind, "PUT bind ") \ EM(rxrpc_local_put_call, "PUT call ") \ EM(rxrpc_local_put_for_use, "PUT for-use ") \ @@ -69,8 +63,6 @@ EM(rxrpc_local_put_peer, "PUT peer ") \ EM(rxrpc_local_put_prealloc_conn, "PUT conn-pre") \ EM(rxrpc_local_put_release_sock, "PUT rel-sock") \ - EM(rxrpc_local_put_queue, "PUT queue ") \ - EM(rxrpc_local_queued, "QUEUED ") \ EM(rxrpc_local_see_tx_ack, "SEE tx-ack ") \ EM(rxrpc_local_stop, "STOP ") \ EM(rxrpc_local_stopped, "STOPPED ") \ @@ -78,11 +70,9 @@ EM(rxrpc_local_unuse_conn_work, "UNU conn-wrk") \ EM(rxrpc_local_unuse_peer_keepalive, "UNU peer-kpa") \ EM(rxrpc_local_unuse_release_sock, "UNU rel-sock") \ - EM(rxrpc_local_unuse_work, "UNU work ") \ EM(rxrpc_local_use_conn_work, "USE conn-wrk") \ EM(rxrpc_local_use_lookup, "USE lookup ") \ - EM(rxrpc_local_use_peer_keepalive, "USE peer-kpa") \ - E_(rxrpc_local_use_work, "USE work ") + E_(rxrpc_local_use_peer_keepalive, "USE peer-kpa") #define rxrpc_peer_traces \ EM(rxrpc_peer_free, "FREE ") \ @@ -90,6 +80,7 @@ EM(rxrpc_peer_get_activate_call, "GET act-call") \ EM(rxrpc_peer_get_bundle, "GET bundle ") \ EM(rxrpc_peer_get_client_conn, "GET cln-conn") \ + EM(rxrpc_peer_get_input, "GET input ") \ EM(rxrpc_peer_get_input_error, "GET inpt-err") \ EM(rxrpc_peer_get_keepalive, 
"GET keepaliv") \ EM(rxrpc_peer_get_lookup_client, "GET look-cln") \ @@ -100,6 +91,7 @@ EM(rxrpc_peer_put_call, "PUT call ") \ EM(rxrpc_peer_put_conn, "PUT conn ") \ EM(rxrpc_peer_put_discard_tmp, "PUT disc-tmp") \ + EM(rxrpc_peer_put_input, "PUT input ") \ EM(rxrpc_peer_put_input_error, "PUT inpt-err") \ E_(rxrpc_peer_put_keepalive, "PUT keepaliv") @@ -180,11 +172,6 @@ EM(rxrpc_call_put_sendmsg, "PUT sendmsg ") \ EM(rxrpc_call_put_unnotify, "PUT unnotify") \ EM(rxrpc_call_put_userid_exists, "PUT u-exists") \ - EM(rxrpc_call_queue_abort, "QUE abort ") \ - EM(rxrpc_call_queue_requeue, "QUE requeue ") \ - EM(rxrpc_call_queue_resend, "QUE resend ") \ - EM(rxrpc_call_queue_timer, "QUE timer ") \ - EM(rxrpc_call_queue_tx_data, "QUE tx-data ") \ EM(rxrpc_call_see_accept, "SEE accept ") \ EM(rxrpc_call_see_activate_client, "SEE act-clnt") \ EM(rxrpc_call_see_connect_failed, "SEE con-fail") \ @@ -282,6 +269,7 @@ EM(rxrpc_propose_ack_respond_to_ping, "Rsp2Png") \ EM(rxrpc_propose_ack_retry_tx, "RetryTx") \ EM(rxrpc_propose_ack_rotate_rx, "RxAck ") \ + EM(rxrpc_propose_ack_rx_idle, "RxIdle ") \ E_(rxrpc_propose_ack_terminal_ack, "ClTerm ") #define rxrpc_congest_modes \ @@ -1532,6 +1520,7 @@ TRACE_EVENT(rxrpc_connect_call, __field(unsigned long, user_call_ID ) __field(u32, cid ) __field(u32, call_id ) + __field_struct(struct sockaddr_rxrpc, srx ) ), TP_fast_assign( @@ -1539,33 +1528,42 @@ TRACE_EVENT(rxrpc_connect_call, __entry->user_call_ID = call->user_call_ID; __entry->cid = call->cid; __entry->call_id = call->call_id; + __entry->srx = call->dest_srx; ), - TP_printk("c=%08x u=%p %08x:%08x", + TP_printk("c=%08x u=%p %08x:%08x dst=%pISp", __entry->call, (void *)__entry->user_call_ID, __entry->cid, - __entry->call_id) + __entry->call_id, + &__entry->srx.transport) ); TRACE_EVENT(rxrpc_resend, - TP_PROTO(struct rxrpc_call *call), + TP_PROTO(struct rxrpc_call *call, struct sk_buff *ack), - TP_ARGS(call), + TP_ARGS(call, ack), TP_STRUCT__entry( __field(unsigned int, call ) __field(rxrpc_seq_t, seq ) + __field(rxrpc_seq_t, transmitted ) + __field(rxrpc_serial_t, ack_serial ) ), TP_fast_assign( + struct rxrpc_skb_priv *sp = ack ? rxrpc_skb(ack) : NULL; __entry->call = call->debug_id; __entry->seq = call->acks_hard_ack; + __entry->transmitted = call->tx_transmitted; + __entry->ack_serial = sp ? 
sp->hdr.serial : 0; ), - TP_printk("c=%08x q=%x", + TP_printk("c=%08x r=%x q=%x tq=%x", __entry->call, - __entry->seq) + __entry->ack_serial, + __entry->seq, + __entry->transmitted) ); TRACE_EVENT(rxrpc_rx_icmp, diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index c3c915a05627..6b993a3d4186 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -283,14 +283,11 @@ struct rxrpc_local { struct rxrpc_net *rxnet; /* The network ns in which this resides */ struct hlist_node link; struct socket *socket; /* my UDP socket */ - struct work_struct processor; struct task_struct *io_thread; struct list_head ack_tx_queue; /* List of ACKs that need sending */ spinlock_t ack_tx_lock; /* ACK list lock */ struct rxrpc_sock __rcu *service; /* Service(s) listening on this endpoint */ struct rw_semaphore defrag_sem; /* control re-enablement of IP DF bit */ - struct sk_buff_head reject_queue; /* packets awaiting rejection */ - struct sk_buff_head event_queue; /* endpoint event packets awaiting processing */ struct sk_buff_head rx_queue; /* Received packets */ struct list_head call_attend_q; /* Calls requiring immediate attention */ struct rb_root client_bundles; /* Client connection bundles by socket params */ @@ -524,23 +521,19 @@ enum rxrpc_call_flag { RXRPC_CALL_RETRANS_TIMEOUT, /* Retransmission due to timeout occurred */ RXRPC_CALL_BEGAN_RX_TIMER, /* We began the expect_rx_by timer */ RXRPC_CALL_RX_HEARD, /* The peer responded at least once to this call */ - RXRPC_CALL_RX_UNDERRUN, /* Got data underrun */ RXRPC_CALL_DISCONNECTED, /* The call has been disconnected */ RXRPC_CALL_KERNEL, /* The call was made by the kernel */ RXRPC_CALL_UPGRADE, /* Service upgrade was requested for the call */ - RXRPC_CALL_DELAY_ACK_PENDING, /* DELAY ACK generation is pending */ - RXRPC_CALL_IDLE_ACK_PENDING, /* IDLE ACK generation is pending */ RXRPC_CALL_EXCLUSIVE, /* The call uses a once-only connection */ + RXRPC_CALL_RX_IS_IDLE, /* Reception is idle - send an ACK */ }; /* * Events that can be raised on a call. 
*/ enum rxrpc_call_event { - RXRPC_CALL_EV_ABORT, /* need to generate abort */ - RXRPC_CALL_EV_RESEND, /* Tx resend required */ - RXRPC_CALL_EV_EXPIRED, /* Expiry occurred */ RXRPC_CALL_EV_ACK_LOST, /* ACK may be lost, send ping */ + RXRPC_CALL_EV_INITIAL_PING, /* Send initial ping for a new service call */ }; /* @@ -611,7 +604,6 @@ struct rxrpc_call { u32 next_rx_timo; /* Timeout for next Rx packet (jif) */ u32 next_req_timo; /* Timeout for next Rx request packet (jif) */ struct timer_list timer; /* Combined event timer */ - struct work_struct processor; /* Event processor */ struct work_struct destroyer; /* In-process-context destroyer */ rxrpc_notify_rx_t notify_rx; /* kernel service Rx notification function */ struct list_head link; /* link in master call list */ @@ -705,11 +697,7 @@ struct rxrpc_call { rxrpc_seq_t acks_prev_seq; /* Highest previousPacket received */ rxrpc_seq_t acks_hard_ack; /* Latest hard-ack point */ rxrpc_seq_t acks_lowest_nak; /* Lowest NACK in the buffer (or ==tx_hard_ack) */ - rxrpc_seq_t acks_lost_top; /* tx_top at the time lost-ack ping sent */ - rxrpc_serial_t acks_lost_ping; /* Serial number of probe ACK */ rxrpc_serial_t acks_highest_serial; /* Highest serial number ACK'd */ - struct sk_buff *acks_soft_tbl; /* The last ACK packet with NAKs in it */ - spinlock_t acks_ack_lock; /* Access to ->acks_last_ack */ }; /* @@ -822,10 +810,9 @@ extern struct workqueue_struct *rxrpc_workqueue; */ int rxrpc_service_prealloc(struct rxrpc_sock *, gfp_t); void rxrpc_discard_prealloc(struct rxrpc_sock *); -struct rxrpc_call *rxrpc_new_incoming_call(struct rxrpc_local *, - struct rxrpc_sock *, - struct sockaddr_rxrpc *, - struct sk_buff *); +bool rxrpc_new_incoming_call(struct rxrpc_local *, struct rxrpc_peer *, + struct rxrpc_connection *, struct sockaddr_rxrpc *, + struct sk_buff *); void rxrpc_accept_incoming_calls(struct rxrpc_local *); int rxrpc_user_charge_accept(struct rxrpc_sock *, unsigned long); @@ -838,13 +825,15 @@ void rxrpc_send_ACK(struct rxrpc_call *, u8, rxrpc_serial_t, enum rxrpc_propose_ void rxrpc_propose_delay_ACK(struct rxrpc_call *, rxrpc_serial_t, enum rxrpc_propose_ack_trace); void rxrpc_shrink_call_tx_buffer(struct rxrpc_call *); -void rxrpc_process_call(struct work_struct *); +void rxrpc_resend(struct rxrpc_call *call, struct sk_buff *ack_skb); void rxrpc_reduce_call_timer(struct rxrpc_call *call, unsigned long expire_at, unsigned long now, enum rxrpc_timer_trace why); +void rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb); + /* * call_object.c */ @@ -864,9 +853,8 @@ void rxrpc_incoming_call(struct rxrpc_sock *, struct rxrpc_call *, struct sk_buff *); void rxrpc_release_call(struct rxrpc_sock *, struct rxrpc_call *); void rxrpc_release_calls_on_socket(struct rxrpc_sock *); -void rxrpc_queue_call(struct rxrpc_call *, enum rxrpc_call_trace); void rxrpc_see_call(struct rxrpc_call *, enum rxrpc_call_trace); -bool rxrpc_try_get_call(struct rxrpc_call *, enum rxrpc_call_trace); +struct rxrpc_call *rxrpc_try_get_call(struct rxrpc_call *, enum rxrpc_call_trace); void rxrpc_get_call(struct rxrpc_call *, enum rxrpc_call_trace); void rxrpc_put_call(struct rxrpc_call *, enum rxrpc_call_trace); void rxrpc_cleanup_call(struct rxrpc_call *); @@ -908,6 +896,7 @@ void rxrpc_clean_up_local_conns(struct rxrpc_local *); */ void rxrpc_process_connection(struct work_struct *); void rxrpc_process_delayed_final_acks(struct rxrpc_connection *, bool); +int rxrpc_input_conn_packet(struct rxrpc_connection *conn, struct sk_buff *skb); /* * conn_object.c @@ 
-916,10 +905,9 @@ extern unsigned int rxrpc_connection_expiry; extern unsigned int rxrpc_closed_conn_expiry; struct rxrpc_connection *rxrpc_alloc_connection(struct rxrpc_net *, gfp_t); -struct rxrpc_connection *rxrpc_find_connection_rcu(struct rxrpc_local *, - struct sockaddr_rxrpc *, - struct sk_buff *, - struct rxrpc_peer **); +struct rxrpc_connection *rxrpc_find_client_connection_rcu(struct rxrpc_local *, + struct sockaddr_rxrpc *, + struct sk_buff *); void __rxrpc_disconnect_call(struct rxrpc_connection *, struct rxrpc_call *); void rxrpc_disconnect_call(struct rxrpc_call *); void rxrpc_kill_client_conn(struct rxrpc_connection *); @@ -962,8 +950,8 @@ void rxrpc_unpublish_service_conn(struct rxrpc_connection *); /* * input.c */ -void rxrpc_input_call_event(struct rxrpc_call *, struct sk_buff *); -void rxrpc_input_implicit_end_call(struct rxrpc_connection *, struct rxrpc_call *); +void rxrpc_input_call_packet(struct rxrpc_call *, struct sk_buff *); +void rxrpc_implicit_end_call(struct rxrpc_call *, struct sk_buff *); /* * io_thread.c @@ -993,7 +981,9 @@ int rxrpc_get_server_data_key(struct rxrpc_connection *, const void *, time64_t, /* * local_event.c */ -extern void rxrpc_process_local_events(struct rxrpc_local *); +void rxrpc_send_version_request(struct rxrpc_local *local, + struct rxrpc_host_header *hdr, + struct sk_buff *skb); /* * local_object.c @@ -1004,7 +994,6 @@ struct rxrpc_local *rxrpc_get_local_maybe(struct rxrpc_local *, enum rxrpc_local void rxrpc_put_local(struct rxrpc_local *, enum rxrpc_local_trace); struct rxrpc_local *rxrpc_use_local(struct rxrpc_local *, enum rxrpc_local_trace); void rxrpc_unuse_local(struct rxrpc_local *, enum rxrpc_local_trace); -void rxrpc_queue_local(struct rxrpc_local *); void rxrpc_destroy_local(struct rxrpc_local *local); void rxrpc_destroy_all_locals(struct rxrpc_net *); @@ -1068,7 +1057,7 @@ static inline struct rxrpc_net *rxrpc_net(struct net *net) void rxrpc_transmit_ack_packets(struct rxrpc_local *); int rxrpc_send_abort_packet(struct rxrpc_call *); int rxrpc_send_data_packet(struct rxrpc_call *, struct rxrpc_txbuf *); -void rxrpc_reject_packets(struct rxrpc_local *); +void rxrpc_reject_packet(struct rxrpc_local *local, struct sk_buff *skb); void rxrpc_send_keepalive(struct rxrpc_peer *); void rxrpc_transmit_one(struct rxrpc_call *call, struct rxrpc_txbuf *txb); @@ -1178,7 +1167,6 @@ int rxrpc_server_keyring(struct rxrpc_sock *, sockptr_t, int); * skbuff.c */ void rxrpc_kernel_data_consumed(struct rxrpc_call *, struct sk_buff *); -void rxrpc_packet_destructor(struct sk_buff *); void rxrpc_new_skb(struct sk_buff *, enum rxrpc_skb_trace); void rxrpc_see_skb(struct sk_buff *, enum rxrpc_skb_trace); void rxrpc_eaten_skb(struct sk_buff *, enum rxrpc_skb_trace); diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c index 11134b7cec17..86a4187fb2fb 100644 --- a/net/rxrpc/call_accept.c +++ b/net/rxrpc/call_accept.c @@ -100,6 +100,7 @@ static int rxrpc_service_prealloc_one(struct rxrpc_sock *rx, return -ENOMEM; call->flags |= (1 << RXRPC_CALL_IS_SERVICE); call->state = RXRPC_CALL_SERVER_PREALLOC; + __set_bit(RXRPC_CALL_EV_INITIAL_PING, &call->events); trace_rxrpc_call(call->debug_id, refcount_read(&call->ref), user_call_ID, rxrpc_call_new_prealloc_service); @@ -234,21 +235,6 @@ void rxrpc_discard_prealloc(struct rxrpc_sock *rx) kfree(b); } -/* - * Ping the other end to fill our RTT cache and to retrieve the rwind - * and MTU parameters. 
- */ -static void rxrpc_send_ping(struct rxrpc_call *call, struct sk_buff *skb) -{ - struct rxrpc_skb_priv *sp = rxrpc_skb(skb); - ktime_t now = skb->tstamp; - - if (call->peer->rtt_count < 3 || - ktime_before(ktime_add_ms(call->peer->rtt_last_req, 1000), now)) - rxrpc_send_ACK(call, RXRPC_ACK_PING, sp->hdr.serial, - rxrpc_propose_ack_ping_for_params); -} - /* * Allocate a new incoming call from the prealloc pool, along with a connection * and a peer as necessary. @@ -330,33 +316,56 @@ static struct rxrpc_call *rxrpc_alloc_incoming_call(struct rxrpc_sock *rx, } /* - * Set up a new incoming call. Called in BH context with the RCU read lock - * held. + * Set up a new incoming call. Called from the I/O thread. * * If this is for a kernel service, when we allocate the call, it will have * three refs on it: (1) the kernel service, (2) the user_call_ID tree, (3) the * retainer ref obtained from the backlog buffer. Prealloc calls for userspace - * services only have the ref from the backlog buffer. We pass this ref to the - * caller. + * services only have the ref from the backlog buffer. * * If we want to report an error, we mark the skb with the packet type and - * abort code and return NULL. - * - * The call is returned with the user access mutex held and a ref on it. + * abort code and return false. */ -struct rxrpc_call *rxrpc_new_incoming_call(struct rxrpc_local *local, - struct rxrpc_sock *rx, - struct sockaddr_rxrpc *peer_srx, - struct sk_buff *skb) +bool rxrpc_new_incoming_call(struct rxrpc_local *local, + struct rxrpc_peer *peer, + struct rxrpc_connection *conn, + struct sockaddr_rxrpc *peer_srx, + struct sk_buff *skb) { - struct rxrpc_skb_priv *sp = rxrpc_skb(skb); const struct rxrpc_security *sec = NULL; - struct rxrpc_connection *conn; - struct rxrpc_peer *peer = NULL; + struct rxrpc_skb_priv *sp = rxrpc_skb(skb); struct rxrpc_call *call = NULL; + struct rxrpc_sock *rx; _enter(""); + /* Don't set up a call for anything other than the first DATA packet. */ + if (sp->hdr.seq != 1 || + sp->hdr.type != RXRPC_PACKET_TYPE_DATA) + return 0; + + rcu_read_lock(); + + /* Weed out packets to services we're not offering. Packets that would + * begin a call are explicitly rejected and the rest are just + * discarded. + */ + rx = rcu_dereference(local->service); + if (!rx || (sp->hdr.serviceId != rx->srx.srx_service && + sp->hdr.serviceId != rx->second_service) + ) { + if (sp->hdr.type == RXRPC_PACKET_TYPE_DATA && + sp->hdr.seq == 1) + goto unsupported_service; + goto discard; + } + + if (!conn) { + sec = rxrpc_get_incoming_security(rx, skb); + if (!sec) + goto no_call; + } + spin_lock(&rx->incoming_lock); if (rx->sk.sk_state == RXRPC_SERVER_LISTEN_DISABLED || rx->sk.sk_state == RXRPC_CLOSE) { @@ -367,19 +376,6 @@ struct rxrpc_call *rxrpc_new_incoming_call(struct rxrpc_local *local, goto no_call; } - /* The peer, connection and call may all have sprung into existence due - * to a duplicate packet being handled on another CPU in parallel, so - * we have to recheck the routing. However, we're now holding - * rx->incoming_lock, so the values should remain stable. 
- */ - conn = rxrpc_find_connection_rcu(local, peer_srx, skb, &peer); - - if (!conn) { - sec = rxrpc_get_incoming_security(rx, skb); - if (!sec) - goto no_call; - } - call = rxrpc_alloc_incoming_call(rx, local, peer, conn, sec, peer_srx, skb); if (!call) { @@ -398,35 +394,15 @@ struct rxrpc_call *rxrpc_new_incoming_call(struct rxrpc_local *local, rx->notify_new_call(&rx->sk, call, call->user_call_ID); spin_lock(&conn->state_lock); - switch (conn->state) { - case RXRPC_CONN_SERVICE_UNSECURED: + if (conn->state == RXRPC_CONN_SERVICE_UNSECURED) { conn->state = RXRPC_CONN_SERVICE_CHALLENGING; set_bit(RXRPC_CONN_EV_CHALLENGE, &call->conn->events); rxrpc_queue_conn(call->conn, rxrpc_conn_queue_challenge); - break; - - case RXRPC_CONN_SERVICE: - write_lock(&call->state_lock); - if (call->state < RXRPC_CALL_COMPLETE) - call->state = RXRPC_CALL_SERVER_RECV_REQUEST; - write_unlock(&call->state_lock); - break; - - case RXRPC_CONN_REMOTELY_ABORTED: - rxrpc_set_call_completion(call, RXRPC_CALL_REMOTELY_ABORTED, - conn->abort_code, conn->error); - break; - case RXRPC_CONN_LOCALLY_ABORTED: - rxrpc_abort_call("CON", call, sp->hdr.seq, - conn->abort_code, conn->error); - break; - default: - BUG(); } spin_unlock(&conn->state_lock); - spin_unlock(&rx->incoming_lock); - rxrpc_send_ping(call, skb); + spin_unlock(&rx->incoming_lock); + rcu_read_unlock(); if (hlist_unhashed(&call->error_link)) { spin_lock(&call->peer->lock); @@ -435,12 +411,24 @@ struct rxrpc_call *rxrpc_new_incoming_call(struct rxrpc_local *local, } _leave(" = %p{%d}", call, call->debug_id); - return call; - + rxrpc_input_call_event(call, skb); + rxrpc_put_call(call, rxrpc_call_put_input); + return true; + +unsupported_service: + trace_rxrpc_abort(0, "INV", sp->hdr.cid, sp->hdr.callNumber, sp->hdr.seq, + RX_INVALID_OPERATION, EOPNOTSUPP); + skb->priority = RX_INVALID_OPERATION; + goto reject; no_call: spin_unlock(&rx->incoming_lock); - _leave(" = NULL [%u]", skb->mark); - return NULL; +reject: + rcu_read_unlock(); + _leave(" = f [%u]", skb->mark); + return false; +discard: + rcu_read_lock(); + return true; } /* diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c index c9f835292f7b..9db62fa55c62 100644 --- a/net/rxrpc/call_event.c +++ b/net/rxrpc/call_event.c @@ -74,11 +74,6 @@ void rxrpc_send_ACK(struct rxrpc_call *call, u8 ack_reason, if (test_bit(RXRPC_CALL_DISCONNECTED, &call->flags)) return; - if (ack_reason == RXRPC_ACK_DELAY && - test_and_set_bit(RXRPC_CALL_DELAY_ACK_PENDING, &call->flags)) { - trace_rxrpc_drop_ack(call, why, ack_reason, serial, false); - return; - } rxrpc_inc_stat(call->rxnet, stat_tx_acks[ack_reason]); @@ -111,12 +106,7 @@ void rxrpc_send_ACK(struct rxrpc_call *call, u8 ack_reason, spin_unlock_bh(&local->ack_tx_lock); trace_rxrpc_send_ack(call, why, ack_reason, serial); - if (!rcu_read_lock_held()) { - rxrpc_transmit_ack_packets(call->peer->local); - } else { - rxrpc_get_local(local, rxrpc_local_get_queue); - rxrpc_queue_local(local); - } + rxrpc_wake_up_io_thread(local); } /* @@ -130,11 +120,10 @@ static void rxrpc_congestion_timeout(struct rxrpc_call *call) /* * Perform retransmission of NAK'd and unack'd packets. 
*/ -static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j) +void rxrpc_resend(struct rxrpc_call *call, struct sk_buff *ack_skb) { struct rxrpc_ackpacket *ack = NULL; struct rxrpc_txbuf *txb; - struct sk_buff *ack_skb = NULL; unsigned long resend_at; rxrpc_seq_t transmitted = READ_ONCE(call->tx_transmitted); ktime_t now, max_age, oldest, ack_ts; @@ -148,32 +137,21 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j) max_age = ktime_sub_us(now, jiffies_to_usecs(call->peer->rto_j)); oldest = now; - /* See if there's an ACK saved with a soft-ACK table in it. */ - if (call->acks_soft_tbl) { - spin_lock_bh(&call->acks_ack_lock); - ack_skb = call->acks_soft_tbl; - if (ack_skb) { - rxrpc_get_skb(ack_skb, rxrpc_skb_get_ack); - ack = (void *)ack_skb->data + sizeof(struct rxrpc_wire_header); - } - spin_unlock_bh(&call->acks_ack_lock); - } - if (list_empty(&call->tx_buffer)) goto no_resend; - spin_lock(&call->tx_lock); - if (list_empty(&call->tx_buffer)) goto no_further_resend; - trace_rxrpc_resend(call); + trace_rxrpc_resend(call, ack_skb); txb = list_first_entry(&call->tx_buffer, struct rxrpc_txbuf, call_link); /* Scan the soft ACK table without dropping the lock and resend any * explicitly NAK'd packets. */ - if (ack) { + if (ack_skb) { + ack = (void *)ack_skb->data + sizeof(struct rxrpc_wire_header); + for (i = 0; i < ack->nAcks; i++) { rxrpc_seq_t seq; @@ -197,7 +175,6 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j) rxrpc_see_txbuf(txb, rxrpc_txbuf_see_unacked); if (list_empty(&txb->tx_link)) { - rxrpc_get_txbuf(txb, rxrpc_txbuf_get_retrans); list_add_tail(&txb->tx_link, &retrans_queue); set_bit(RXRPC_TXBUF_RESENT, &txb->flags); } @@ -241,7 +218,6 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j) do_resend: unacked = true; if (list_empty(&txb->tx_link)) { - rxrpc_get_txbuf(txb, rxrpc_txbuf_get_retrans); list_add_tail(&txb->tx_link, &retrans_queue); set_bit(RXRPC_TXBUF_RESENT, &txb->flags); rxrpc_inc_stat(call->rxnet, stat_tx_data_retrans); @@ -249,10 +225,7 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j) } no_further_resend: - spin_unlock(&call->tx_lock); no_resend: - rxrpc_free_skb(ack_skb, rxrpc_skb_put_ack); - resend_at = nsecs_to_jiffies(ktime_to_ns(ktime_sub(now, oldest))); resend_at += jiffies + rxrpc_get_rto_backoff(call->peer, !list_empty(&retrans_queue)); @@ -266,7 +239,7 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j) * retransmitting data. */ if (list_empty(&retrans_queue)) { - rxrpc_reduce_call_timer(call, resend_at, now_j, + rxrpc_reduce_call_timer(call, resend_at, jiffies, rxrpc_timer_set_for_resend); ack_ts = ktime_sub(now, call->acks_latest_ts); if (ktime_to_us(ack_ts) < (call->peer->srtt_us >> 3)) @@ -276,15 +249,11 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j) goto out; } + /* Retransmit the queue */ while ((txb = list_first_entry_or_null(&retrans_queue, struct rxrpc_txbuf, tx_link))) { list_del_init(&txb->tx_link); - rxrpc_send_data_packet(call, txb); - rxrpc_put_txbuf(txb, rxrpc_txbuf_put_trans); - - trace_rxrpc_retransmit(call, txb->seq, - ktime_to_ns(ktime_sub(txb->last_sent, - max_age))); + rxrpc_transmit_one(call, txb); } out: @@ -357,16 +326,27 @@ static void rxrpc_transmit_some_data(struct rxrpc_call *call) } } +/* + * Ping the other end to fill our RTT cache and to retrieve the rwind + * and MTU parameters. 
+ */ +static void rxrpc_send_initial_ping(struct rxrpc_call *call) +{ + if (call->peer->rtt_count < 3 || + ktime_before(ktime_add_ms(call->peer->rtt_last_req, 1000), + ktime_get_real())) + rxrpc_send_ACK(call, RXRPC_ACK_PING, 0, + rxrpc_propose_ack_ping_for_params); +} + /* * Handle retransmission and deferred ACK/abort generation. */ -void rxrpc_process_call(struct work_struct *work) +void rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) { - struct rxrpc_call *call = - container_of(work, struct rxrpc_call, processor); unsigned long now, next, t; - unsigned int iterations = 0; rxrpc_serial_t ackr_serial; + bool resend = false, expired = false; rxrpc_see_call(call, rxrpc_call_see_input); @@ -374,47 +354,31 @@ void rxrpc_process_call(struct work_struct *work) _enter("{%d,%s,%lx}", call->debug_id, rxrpc_call_states[call->state], call->events); -recheck_state: - if (call->acks_hard_ack != call->tx_bottom) - rxrpc_shrink_call_tx_buffer(call); - - /* Limit the number of times we do this before returning to the manager */ - if (!rxrpc_tx_window_has_space(call) || - list_empty(&call->tx_sendmsg)) { - iterations++; - if (iterations > 5) - goto requeue; - } - - if (test_and_clear_bit(RXRPC_CALL_EV_ABORT, &call->events)) { - rxrpc_send_abort_packet(call); - goto recheck_state; - } + if (call->state == RXRPC_CALL_COMPLETE) + goto out; - if (call->state == RXRPC_CALL_COMPLETE) { - del_timer_sync(&call->timer); + if (skb && skb->mark == RXRPC_SKB_MARK_ERROR) goto out; - } - /* Work out if any timeouts tripped */ + /* If we see our async-event poke, check for timeout trippage. */ now = jiffies; t = READ_ONCE(call->expect_rx_by); if (time_after_eq(now, t)) { trace_rxrpc_timer(call, rxrpc_timer_exp_normal, now); - set_bit(RXRPC_CALL_EV_EXPIRED, &call->events); + expired = true; } t = READ_ONCE(call->expect_req_by); if (call->state == RXRPC_CALL_SERVER_RECV_REQUEST && time_after_eq(now, t)) { trace_rxrpc_timer(call, rxrpc_timer_exp_idle, now); - set_bit(RXRPC_CALL_EV_EXPIRED, &call->events); + expired = true; } t = READ_ONCE(call->expect_term_by); if (time_after_eq(now, t)) { trace_rxrpc_timer(call, rxrpc_timer_exp_hard, now); - set_bit(RXRPC_CALL_EV_EXPIRED, &call->events); + expired = true; } t = READ_ONCE(call->delay_ack_at); @@ -453,13 +417,19 @@ void rxrpc_process_call(struct work_struct *work) if (time_after_eq(now, t)) { trace_rxrpc_timer(call, rxrpc_timer_exp_resend, now); cmpxchg(&call->resend_at, t, now + MAX_JIFFY_OFFSET); - set_bit(RXRPC_CALL_EV_RESEND, &call->events); + resend = true; } + if (skb) + rxrpc_input_call_packet(call, skb); + rxrpc_transmit_some_data(call); + if (test_and_clear_bit(RXRPC_CALL_EV_INITIAL_PING, &call->events)) + rxrpc_send_initial_ping(call); + /* Process events */ - if (test_and_clear_bit(RXRPC_CALL_EV_EXPIRED, &call->events)) { + if (expired) { if (test_bit(RXRPC_CALL_RX_HEARD, &call->flags) && (int)call->conn->hi_serial - (int)call->rx_serial > 0) { trace_rxrpc_call_reset(call); @@ -467,51 +437,50 @@ void rxrpc_process_call(struct work_struct *work) } else { rxrpc_abort_call("EXP", call, 0, RX_CALL_TIMEOUT, -ETIME); } - set_bit(RXRPC_CALL_EV_ABORT, &call->events); - goto recheck_state; + rxrpc_send_abort_packet(call); + goto out; } - if (test_and_clear_bit(RXRPC_CALL_EV_ACK_LOST, &call->events)) { - call->acks_lost_top = call->tx_top; + if (test_and_clear_bit(RXRPC_CALL_EV_ACK_LOST, &call->events)) rxrpc_send_ACK(call, RXRPC_ACK_PING, 0, rxrpc_propose_ack_ping_for_lost_ack); - } - if (test_and_clear_bit(RXRPC_CALL_EV_RESEND, &call->events) && - 
call->state != RXRPC_CALL_CLIENT_RECV_REPLY) { - rxrpc_resend(call, now); - goto recheck_state; - } + if (resend && call->state != RXRPC_CALL_CLIENT_RECV_REPLY) + rxrpc_resend(call, NULL); + + if (test_and_clear_bit(RXRPC_CALL_RX_IS_IDLE, &call->flags)) + rxrpc_send_ACK(call, RXRPC_ACK_IDLE, 0, + rxrpc_propose_ack_rx_idle); + + if (atomic_read(&call->ackr_nr_unacked) > 2) + rxrpc_send_ACK(call, RXRPC_ACK_IDLE, 0, + rxrpc_propose_ack_input_data); /* Make sure the timer is restarted */ - next = call->expect_rx_by; + if (call->state != RXRPC_CALL_COMPLETE) { + next = call->expect_rx_by; #define set(T) { t = READ_ONCE(T); if (time_before(t, next)) next = t; } - set(call->expect_req_by); - set(call->expect_term_by); - set(call->delay_ack_at); - set(call->ack_lost_at); - set(call->resend_at); - set(call->keepalive_at); - set(call->ping_at); - - now = jiffies; - if (time_after_eq(now, next)) - goto recheck_state; + set(call->expect_req_by); + set(call->expect_term_by); + set(call->delay_ack_at); + set(call->ack_lost_at); + set(call->resend_at); + set(call->keepalive_at); + set(call->ping_at); - rxrpc_reduce_call_timer(call, next, now, rxrpc_timer_restart); + now = jiffies; + if (time_after_eq(now, next)) + rxrpc_poke_call(call, rxrpc_call_poke_timer_now); - /* other events may have been raised since we started checking */ - if (call->events) - goto requeue; + rxrpc_reduce_call_timer(call, next, now, rxrpc_timer_restart); + } out: + if (call->state == RXRPC_CALL_COMPLETE) + del_timer_sync(&call->timer); + if (call->acks_hard_ack != call->tx_bottom) + rxrpc_shrink_call_tx_buffer(call); _leave(""); - return; - -requeue: - if (call->state < RXRPC_CALL_COMPLETE) - rxrpc_queue_call(call, rxrpc_call_queue_requeue); - goto out; } diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c index 7570b4e67bc5..d441a715d988 100644 --- a/net/rxrpc/call_object.c +++ b/net/rxrpc/call_object.c @@ -71,7 +71,7 @@ static void rxrpc_call_timer_expired(struct timer_list *t) if (call->state < RXRPC_CALL_COMPLETE) { trace_rxrpc_timer_expired(call, jiffies); - rxrpc_queue_call(call, rxrpc_call_queue_timer); + rxrpc_poke_call(call, rxrpc_call_poke_timer); } } @@ -148,7 +148,6 @@ struct rxrpc_call *rxrpc_alloc_call(struct rxrpc_sock *rx, gfp_t gfp, &rxrpc_call_user_mutex_lock_class_key); timer_setup(&call->timer, rxrpc_call_timer_expired, 0); - INIT_WORK(&call->processor, rxrpc_process_call); INIT_WORK(&call->destroyer, rxrpc_destroy_call); INIT_LIST_HEAD(&call->link); INIT_LIST_HEAD(&call->chan_wait_link); @@ -163,7 +162,6 @@ struct rxrpc_call *rxrpc_alloc_call(struct rxrpc_sock *rx, gfp_t gfp, init_waitqueue_head(&call->waitq); spin_lock_init(&call->notify_lock); spin_lock_init(&call->tx_lock); - spin_lock_init(&call->acks_ack_lock); rwlock_init(&call->state_lock); refcount_set(&call->ref, 1); call->debug_id = debug_id; @@ -252,6 +250,7 @@ static void rxrpc_start_call_timer(struct rxrpc_call *call) call->ack_lost_at = j; call->resend_at = j; call->ping_at = j; + call->keepalive_at = j; call->expect_rx_by = j; call->expect_req_by = j; call->expect_term_by = j; @@ -430,6 +429,29 @@ void rxrpc_incoming_call(struct rxrpc_sock *rx, call->state = RXRPC_CALL_SERVER_SECURING; call->cong_tstamp = skb->tstamp; + spin_lock(&conn->state_lock); + + switch (conn->state) { + case RXRPC_CONN_SERVICE_UNSECURED: + case RXRPC_CONN_SERVICE_CHALLENGING: + call->state = RXRPC_CALL_SERVER_SECURING; + break; + case RXRPC_CONN_SERVICE: + call->state = RXRPC_CALL_SERVER_RECV_REQUEST; + break; + + case RXRPC_CONN_REMOTELY_ABORTED: + 
__rxrpc_set_call_completion(call, RXRPC_CALL_REMOTELY_ABORTED, + conn->abort_code, conn->error); + break; + case RXRPC_CONN_LOCALLY_ABORTED: + __rxrpc_abort_call("CON", call, 1, + conn->abort_code, conn->error); + break; + default: + BUG(); + } + /* Set the channel for this call. We don't get channel_lock as we're * only defending against the data_ready handler (which we're called * from) and the RESPONSE packet parser (which is only really @@ -440,6 +462,7 @@ void rxrpc_incoming_call(struct rxrpc_sock *rx, conn->channels[chan].call_counter = call->call_id; conn->channels[chan].call_id = call->call_id; rcu_assign_pointer(conn->channels[chan].call, call); + spin_unlock(&conn->state_lock); spin_lock(&conn->peer->lock); hlist_add_head(&call->error_link, &conn->peer->error_targets); @@ -449,15 +472,6 @@ void rxrpc_incoming_call(struct rxrpc_sock *rx, _leave(""); } -/* - * Queue a call's work processor. - */ -void rxrpc_queue_call(struct rxrpc_call *call, enum rxrpc_call_trace why) -{ - if (rxrpc_queue_work(&call->processor)) - trace_rxrpc_call(call->debug_id, refcount_read(&call->ref), 0, why); -} - /* * Note the re-emergence of a call. */ @@ -470,14 +484,15 @@ void rxrpc_see_call(struct rxrpc_call *call, enum rxrpc_call_trace why) } } -bool rxrpc_try_get_call(struct rxrpc_call *call, enum rxrpc_call_trace why) +struct rxrpc_call *rxrpc_try_get_call(struct rxrpc_call *call, + enum rxrpc_call_trace why) { int r; - if (!__refcount_inc_not_zero(&call->ref, &r)) - return false; + if (!call || !__refcount_inc_not_zero(&call->ref, &r)) + return NULL; trace_rxrpc_call(call->debug_id, r + 1, 0, why); - return true; + return call; } /* @@ -637,8 +652,6 @@ static void rxrpc_destroy_call(struct work_struct *work) struct rxrpc_call *call = container_of(work, struct rxrpc_call, destroyer); struct rxrpc_txbuf *txb; - del_timer_sync(&call->timer); - cancel_work_sync(&call->processor); /* The processor may restart the timer */ del_timer_sync(&call->timer); rxrpc_cleanup_ring(call); @@ -652,8 +665,8 @@ static void rxrpc_destroy_call(struct work_struct *work) list_del(&txb->call_link); rxrpc_put_txbuf(txb, rxrpc_txbuf_put_cleaned); } + rxrpc_put_txbuf(call->tx_pending, rxrpc_txbuf_put_cleaned); - rxrpc_free_skb(call->acks_soft_tbl, rxrpc_skb_put_ack); rxrpc_put_connection(call->conn, rxrpc_conn_put_call); rxrpc_put_peer(call->peer, rxrpc_peer_put_call); rxrpc_put_local(call->local, rxrpc_local_put_call); @@ -670,10 +683,9 @@ void rxrpc_cleanup_call(struct rxrpc_call *call) ASSERTCMP(call->state, ==, RXRPC_CALL_COMPLETE); ASSERT(test_bit(RXRPC_CALL_RELEASED, &call->flags)); - del_timer_sync(&call->timer); - cancel_work(&call->processor); + del_timer(&call->timer); - if (rcu_read_lock_held() || work_busy(&call->processor)) + if (rcu_read_lock_held()) /* Can't use the rxrpc workqueue as we need to cancel/flush * something that may be running/waiting there. */ diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c index 23a74e35052d..643a56322224 100644 --- a/net/rxrpc/conn_event.c +++ b/net/rxrpc/conn_event.c @@ -479,3 +479,63 @@ void rxrpc_process_connection(struct work_struct *work) rxrpc_unuse_local(conn->local, rxrpc_local_unuse_conn_work); } } + +/* + * post connection-level events to the connection + * - this includes challenges, responses, some aborts and call terminal packet + * retransmission. 
+ */ +static void rxrpc_post_packet_to_conn(struct rxrpc_connection *conn, + struct sk_buff *skb) +{ + _enter("%p,%p", conn, skb); + + rxrpc_get_skb(skb, rxrpc_skb_get_conn_work); + skb_queue_tail(&conn->rx_queue, skb); + rxrpc_queue_conn(conn, rxrpc_conn_queue_rx_work); +} + +/* + * Input a connection-level packet. + */ +int rxrpc_input_conn_packet(struct rxrpc_connection *conn, struct sk_buff *skb) +{ + struct rxrpc_skb_priv *sp = rxrpc_skb(skb); + + if (conn->state >= RXRPC_CONN_REMOTELY_ABORTED) { + _leave(" = -ECONNABORTED [%u]", conn->state); + return -ECONNABORTED; + } + + _enter("{%d},{%u,%%%u},", conn->debug_id, sp->hdr.type, sp->hdr.serial); + + switch (sp->hdr.type) { + case RXRPC_PACKET_TYPE_DATA: + case RXRPC_PACKET_TYPE_ACK: + rxrpc_conn_retransmit_call(conn, skb, + sp->hdr.cid & RXRPC_CHANNELMASK); + return 0; + + case RXRPC_PACKET_TYPE_BUSY: + /* Just ignore BUSY packets for now. */ + return 0; + + case RXRPC_PACKET_TYPE_ABORT: + conn->error = -ECONNABORTED; + conn->abort_code = skb->priority; + conn->state = RXRPC_CONN_REMOTELY_ABORTED; + set_bit(RXRPC_CONN_DONT_REUSE, &conn->flags); + rxrpc_abort_calls(conn, RXRPC_CALL_REMOTELY_ABORTED, sp->hdr.serial); + return -ECONNABORTED; + + case RXRPC_PACKET_TYPE_CHALLENGE: + case RXRPC_PACKET_TYPE_RESPONSE: + rxrpc_post_packet_to_conn(conn, skb); + return 0; + + default: + trace_rxrpc_rx_eproto(NULL, sp->hdr.serial, + tracepoint_string("bad_conn_pkt")); + return -EPROTO; + } +} diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c index 98e49646ca1d..3c8f83dacb2b 100644 --- a/net/rxrpc/conn_object.c +++ b/net/rxrpc/conn_object.c @@ -72,76 +72,55 @@ struct rxrpc_connection *rxrpc_alloc_connection(struct rxrpc_net *rxnet, * * The caller must be holding the RCU read lock. */ -struct rxrpc_connection *rxrpc_find_connection_rcu(struct rxrpc_local *local, - struct sockaddr_rxrpc *srx, - struct sk_buff *skb, - struct rxrpc_peer **_peer) +struct rxrpc_connection *rxrpc_find_client_connection_rcu(struct rxrpc_local *local, + struct sockaddr_rxrpc *srx, + struct sk_buff *skb) { struct rxrpc_connection *conn; - struct rxrpc_conn_proto k; struct rxrpc_skb_priv *sp = rxrpc_skb(skb); struct rxrpc_peer *peer; _enter(",%x", sp->hdr.cid & RXRPC_CIDMASK); - k.epoch = sp->hdr.epoch; - k.cid = sp->hdr.cid & RXRPC_CIDMASK; - - if (rxrpc_to_server(sp)) { - /* We need to look up service connections by the full protocol - * parameter set. We look up the peer first as an intermediate - * step and then the connection from the peer's tree. - */ - peer = rxrpc_lookup_peer_rcu(local, srx); - if (!peer) - goto not_found; - *_peer = peer; - conn = rxrpc_find_service_conn_rcu(peer, skb); - if (!conn || refcount_read(&conn->ref) == 0) - goto not_found; - _leave(" = %p", conn); - return conn; - } else { - /* Look up client connections by connection ID alone as their - * IDs are unique for this machine. - */ - conn = idr_find(&rxrpc_client_conn_ids, sp->hdr.cid >> RXRPC_CIDSHIFT); - if (!conn || refcount_read(&conn->ref) == 0) { - _debug("no conn"); - goto not_found; - } + /* Look up client connections by connection ID alone as their IDs are + * unique for this machine. 
+ */ + conn = idr_find(&rxrpc_client_conn_ids, sp->hdr.cid >> RXRPC_CIDSHIFT); + if (!conn || refcount_read(&conn->ref) == 0) { + _debug("no conn"); + goto not_found; + } - if (conn->proto.epoch != k.epoch || - conn->local != local) + if (conn->proto.epoch != sp->hdr.epoch || + conn->local != local) + goto not_found; + + peer = conn->peer; + switch (srx->transport.family) { + case AF_INET: + if (peer->srx.transport.sin.sin_port != + srx->transport.sin.sin_port || + peer->srx.transport.sin.sin_addr.s_addr != + srx->transport.sin.sin_addr.s_addr) goto not_found; - - peer = conn->peer; - switch (srx->transport.family) { - case AF_INET: - if (peer->srx.transport.sin.sin_port != - srx->transport.sin.sin_port || - peer->srx.transport.sin.sin_addr.s_addr != - srx->transport.sin.sin_addr.s_addr) - goto not_found; - break; + break; #ifdef CONFIG_AF_RXRPC_IPV6 - case AF_INET6: - if (peer->srx.transport.sin6.sin6_port != - srx->transport.sin6.sin6_port || - memcmp(&peer->srx.transport.sin6.sin6_addr, - &srx->transport.sin6.sin6_addr, - sizeof(struct in6_addr)) != 0) - goto not_found; - break; + case AF_INET6: + if (peer->srx.transport.sin6.sin6_port != + srx->transport.sin6.sin6_port || + memcmp(&peer->srx.transport.sin6.sin6_addr, + &srx->transport.sin6.sin6_addr, + sizeof(struct in6_addr)) != 0) + goto not_found; + break; #endif - default: - BUG(); - } - - _leave(" = %p", conn); - return conn; + default: + BUG(); } + _leave(" = %p", conn); + return conn; + not_found: _leave(" = NULL"); return NULL; diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index 01d32f817a7a..7ae7046f0b03 100644 --- a/net/rxrpc/input.c +++ b/net/rxrpc/input.c @@ -12,10 +12,8 @@ static void rxrpc_proto_abort(const char *why, struct rxrpc_call *call, rxrpc_seq_t seq) { - if (rxrpc_abort_call(why, call, seq, RX_PROTOCOL_ERROR, -EBADMSG)) { - set_bit(RXRPC_CALL_EV_ABORT, &call->events); - rxrpc_queue_call(call, rxrpc_call_queue_abort); - } + if (rxrpc_abort_call(why, call, seq, RX_PROTOCOL_ERROR, -EBADMSG)) + rxrpc_send_abort_packet(call); } /* @@ -174,8 +172,8 @@ static void rxrpc_congestion_management(struct rxrpc_call *call, call->cong_cwnd = cwnd; call->cong_cumul_acks = cumulative_acks; trace_rxrpc_congest(call, summary, acked_serial, change); - if (resend && !test_and_set_bit(RXRPC_CALL_EV_RESEND, &call->events)) - rxrpc_queue_call(call, rxrpc_call_queue_resend); + if (resend) + rxrpc_resend(call, skb); return; packet_loss_detected: @@ -398,6 +396,8 @@ static void rxrpc_input_data_one(struct rxrpc_call *call, struct sk_buff *skb, /* Send an immediate ACK if we fill in a hole */ else if (!skb_queue_empty(&call->rx_oos_queue)) ack_reason = RXRPC_ACK_DELAY; + else + atomic_inc_return(&call->ackr_nr_unacked); window++; if (after(window, wtop)) @@ -473,14 +473,6 @@ static void rxrpc_input_data_one(struct rxrpc_call *call, struct sk_buff *skb, } send_ack: - if (ack_reason < 0 && - atomic_inc_return(&call->ackr_nr_unacked) > 2 && - test_and_set_bit(RXRPC_CALL_IDLE_ACK_PENDING, &call->flags)) { - ack_reason = RXRPC_ACK_IDLE; - } else if (ack_reason >= 0) { - set_bit(RXRPC_CALL_IDLE_ACK_PENDING, &call->flags); - } - if (ack_reason >= 0) rxrpc_send_ACK(call, ack_reason, serial, rxrpc_propose_ack_input_data); @@ -510,7 +502,7 @@ static bool rxrpc_input_split_jumbo(struct rxrpc_call *call, struct sk_buff *skb &jhdr, sizeof(jhdr)) < 0) goto protocol_error; - jskb = skb_clone(skb, GFP_ATOMIC); + jskb = skb_clone(skb, GFP_NOFS); if (!jskb) { kdebug("couldn't clone"); return false; @@ -562,24 +554,6 @@ static void 
rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb) if (state >= RXRPC_CALL_COMPLETE) return; - /* Unshare the packet so that it can be modified for in-place - * decryption. - */ - if (sp->hdr.securityIndex != 0) { - struct sk_buff *nskb = skb_unshare(skb, GFP_ATOMIC); - if (!nskb) { - rxrpc_eaten_skb(skb, rxrpc_skb_eaten_by_unshare_nomem); - return; - } - - if (nskb != skb) { - rxrpc_eaten_skb(skb, rxrpc_skb_eaten_by_unshare); - skb = nskb; - rxrpc_new_skb(skb, rxrpc_skb_new_unshared); - sp = rxrpc_skb(skb); - } - } - if (state == RXRPC_CALL_SERVER_RECV_REQUEST) { unsigned long timo = READ_ONCE(call->next_req_timo); unsigned long now, expect_req_by; @@ -599,15 +573,15 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb) if ((state == RXRPC_CALL_CLIENT_SEND_REQUEST || state == RXRPC_CALL_CLIENT_AWAIT_REPLY) && !rxrpc_receiving_reply(call)) - goto out; + goto out_notify; if (!rxrpc_input_split_jumbo(call, skb)) { rxrpc_proto_abort("VLD", call, sp->hdr.seq); - goto out; + goto out_notify; } skb = NULL; -out: +out_notify: trace_rxrpc_notify_socket(call->debug_id, serial); rxrpc_notify_socket(call); _leave(" [queued]"); @@ -667,32 +641,6 @@ static void rxrpc_complete_rtt_probe(struct rxrpc_call *call, trace_rxrpc_rtt_rx(call, rxrpc_rtt_rx_lost, 9, 0, acked_serial, 0, 0); } -/* - * Process the response to a ping that we sent to find out if we lost an ACK. - * - * If we got back a ping response that indicates a lower tx_top than what we - * had at the time of the ping transmission, we adjudge all the DATA packets - * sent between the response tx_top and the ping-time tx_top to have been lost. - */ -static void rxrpc_input_check_for_lost_ack(struct rxrpc_call *call) -{ - if (after(call->acks_lost_top, call->acks_prev_seq) && - !test_and_set_bit(RXRPC_CALL_EV_RESEND, &call->events)) - rxrpc_queue_call(call, rxrpc_call_queue_resend); -} - -/* - * Process a ping response. 
- */ -static void rxrpc_input_ping_response(struct rxrpc_call *call, - ktime_t resp_time, - rxrpc_serial_t acked_serial, - rxrpc_serial_t ack_serial) -{ - if (acked_serial == call->acks_lost_ping) - rxrpc_input_check_for_lost_ack(call); -} - /* * Process the extra information that may be appended to an ACK packet */ @@ -801,7 +749,6 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) struct rxrpc_ackpacket ack; struct rxrpc_skb_priv *sp = rxrpc_skb(skb); struct rxrpc_ackinfo info; - struct sk_buff *skb_old = NULL; rxrpc_serial_t ack_serial, acked_serial; rxrpc_seq_t first_soft_ack, hard_ack, prev_pkt; int nr_acks, offset, ioffset; @@ -809,10 +756,8 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) _enter(""); offset = sizeof(struct rxrpc_wire_header); - if (skb_copy_bits(skb, offset, &ack, sizeof(ack)) < 0) { - rxrpc_proto_abort("XAK", call, 0); - goto out; - } + if (skb_copy_bits(skb, offset, &ack, sizeof(ack)) < 0) + return rxrpc_proto_abort("XAK", call, 0); offset += sizeof(ack); ack_serial = sp->hdr.serial; @@ -863,7 +808,7 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) rxrpc_is_client_call(call)) { rxrpc_set_call_completion(call, RXRPC_CALL_REMOTELY_ABORTED, 0, -ENETRESET); - goto out; + return; } /* If we get an OUT_OF_SEQUENCE ACK from the server, that can also @@ -877,7 +822,7 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) rxrpc_is_client_call(call)) { rxrpc_set_call_completion(call, RXRPC_CALL_REMOTELY_ABORTED, 0, -ENETRESET); - goto out; + return; } /* Discard any out-of-order or duplicate ACKs (outside lock). */ @@ -885,39 +830,25 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) trace_rxrpc_rx_discard_ack(call->debug_id, ack_serial, first_soft_ack, call->acks_first_seq, prev_pkt, call->acks_prev_seq); - goto out; + return; } info.rxMTU = 0; ioffset = offset + nr_acks + 3; if (skb->len >= ioffset + sizeof(info) && - skb_copy_bits(skb, ioffset, &info, sizeof(info)) < 0) { - rxrpc_proto_abort("XAI", call, 0); - goto out; - } + skb_copy_bits(skb, ioffset, &info, sizeof(info)) < 0) + return rxrpc_proto_abort("XAI", call, 0); if (nr_acks > 0) skb_condense(skb); - /* Discard any out-of-order or duplicate ACKs (inside lock). */ - if (!rxrpc_is_ack_valid(call, first_soft_ack, prev_pkt)) { - trace_rxrpc_rx_discard_ack(call->debug_id, ack_serial, - first_soft_ack, call->acks_first_seq, - prev_pkt, call->acks_prev_seq); - goto out; - } call->acks_latest_ts = skb->tstamp; - call->acks_first_seq = first_soft_ack; call->acks_prev_seq = prev_pkt; switch (ack.reason) { case RXRPC_ACK_PING: break; - case RXRPC_ACK_PING_RESPONSE: - rxrpc_input_ping_response(call, skb->tstamp, acked_serial, - ack_serial); - fallthrough; default: if (after(acked_serial, call->acks_highest_serial)) call->acks_highest_serial = acked_serial; @@ -928,10 +859,8 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) if (info.rxMTU) rxrpc_input_ackinfo(call, skb, &info); - if (first_soft_ack == 0) { - rxrpc_proto_abort("AK0", call, 0); - goto out; - } + if (first_soft_ack == 0) + return rxrpc_proto_abort("AK0", call, 0); /* Ignore ACKs unless we are or have just been transmitting. 
*/ switch (READ_ONCE(call->state)) { @@ -941,45 +870,27 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) case RXRPC_CALL_SERVER_AWAIT_ACK: break; default: - goto out; + return; } if (before(hard_ack, call->acks_hard_ack) || - after(hard_ack, call->tx_top)) { - rxrpc_proto_abort("AKW", call, 0); - goto out; - } - if (nr_acks > call->tx_top - hard_ack) { - rxrpc_proto_abort("AKN", call, 0); - goto out; - } + after(hard_ack, call->tx_top)) + return rxrpc_proto_abort("AKW", call, 0); + if (nr_acks > call->tx_top - hard_ack) + return rxrpc_proto_abort("AKN", call, 0); if (after(hard_ack, call->acks_hard_ack)) { if (rxrpc_rotate_tx_window(call, hard_ack, &summary)) { rxrpc_end_tx_phase(call, false, "ETA"); - goto out; + return; } } if (nr_acks > 0) { - if (offset > (int)skb->len - nr_acks) { - rxrpc_proto_abort("XSA", call, 0); - goto out; - } - - rxrpc_get_skb(skb, rxrpc_skb_get_ack); - spin_lock(&call->acks_ack_lock); - skb_old = call->acks_soft_tbl; - call->acks_soft_tbl = skb; - spin_unlock(&call->acks_ack_lock); - + if (offset > (int)skb->len - nr_acks) + return rxrpc_proto_abort("XSA", call, 0); rxrpc_input_soft_acks(call, skb->data + offset, first_soft_ack, nr_acks, &summary); - } else if (call->acks_soft_tbl) { - spin_lock(&call->acks_ack_lock); - skb_old = call->acks_soft_tbl; - call->acks_soft_tbl = NULL; - spin_unlock(&call->acks_ack_lock); } if (test_bit(RXRPC_CALL_TX_LAST, &call->flags) && @@ -989,8 +900,6 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) rxrpc_propose_ack_ping_for_lost_reply); rxrpc_congestion_management(call, skb, &summary, acked_serial); -out: - rxrpc_free_skb(skb_old, rxrpc_skb_put_ack); } /* @@ -1020,13 +929,20 @@ static void rxrpc_input_abort(struct rxrpc_call *call, struct sk_buff *skb) /* * Process an incoming call packet. */ -void rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) +void rxrpc_input_call_packet(struct rxrpc_call *call, struct sk_buff *skb) { struct rxrpc_skb_priv *sp = rxrpc_skb(skb); unsigned long timo; _enter("%p,%p", call, skb); + if (sp->hdr.serviceId != call->dest_srx.srx_service) + call->dest_srx.srx_service = sp->hdr.serviceId; + if ((int)sp->hdr.serial - (int)call->rx_serial > 0) + call->rx_serial = sp->hdr.serial; + if (!test_bit(RXRPC_CALL_RX_HEARD, &call->flags)) + set_bit(RXRPC_CALL_RX_HEARD, &call->flags); + timo = READ_ONCE(call->next_rx_timo); if (timo) { unsigned long now = jiffies, expect_rx_by; @@ -1072,9 +988,10 @@ void rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) * * TODO: If callNumber > call_id + 1, renegotiate security. 
*/ -void rxrpc_input_implicit_end_call(struct rxrpc_connection *conn, - struct rxrpc_call *call) +void rxrpc_implicit_end_call(struct rxrpc_call *call, struct sk_buff *skb) { + struct rxrpc_connection *conn = call->conn; + switch (READ_ONCE(call->state)) { case RXRPC_CALL_SERVER_AWAIT_ACK: rxrpc_call_completed(call); @@ -1082,14 +999,14 @@ void rxrpc_input_implicit_end_call(struct rxrpc_connection *conn, case RXRPC_CALL_COMPLETE: break; default: - if (rxrpc_abort_call("IMP", call, 0, RX_CALL_DEAD, -ESHUTDOWN)) { - set_bit(RXRPC_CALL_EV_ABORT, &call->events); - rxrpc_queue_call(call, rxrpc_call_queue_abort); - } + if (rxrpc_abort_call("IMP", call, 0, RX_CALL_DEAD, -ESHUTDOWN)) + rxrpc_send_abort_packet(call); trace_rxrpc_improper_term(call); break; } + rxrpc_input_call_event(call, skb); + spin_lock(&conn->bundle->channel_lock); __rxrpc_disconnect_call(conn, call); spin_unlock(&conn->bundle->channel_lock); diff --git a/net/rxrpc/io_thread.c b/net/rxrpc/io_thread.c index bc65d83fab88..19aa315eddf5 100644 --- a/net/rxrpc/io_thread.c +++ b/net/rxrpc/io_thread.c @@ -9,6 +9,10 @@ #include "ar-internal.h" +static int rxrpc_input_packet_on_conn(struct rxrpc_connection *conn, + struct sockaddr_rxrpc *peer_srx, + struct sk_buff *skb); + /* * handle data received on the local endpoint * - may be called in interrupt context @@ -63,45 +67,19 @@ void rxrpc_error_report(struct sock *sk) } /* - * post connection-level events to the connection - * - this includes challenges, responses, some aborts and call terminal packet - * retransmission. + * Process event packets targeted at a local endpoint. */ -static void rxrpc_post_packet_to_conn(struct rxrpc_connection *conn, - struct sk_buff *skb) +static void rxrpc_input_version(struct rxrpc_local *local, struct sk_buff *skb) { - _enter("%p,%p", conn, skb); - - rxrpc_get_skb(skb, rxrpc_skb_get_conn_work); - skb_queue_tail(&conn->rx_queue, skb); - rxrpc_queue_conn(conn, rxrpc_conn_queue_rx_work); -} + struct rxrpc_skb_priv *sp = rxrpc_skb(skb); + char v; -/* - * post endpoint-level events to the local endpoint - * - this includes debug and version messages - */ -static void rxrpc_post_packet_to_local(struct rxrpc_local *local, - struct sk_buff *skb) -{ - _enter("%p,%p", local, skb); + _enter(""); - if (rxrpc_get_local_maybe(local, rxrpc_local_get_queue)) { - rxrpc_get_skb(skb, rxrpc_skb_get_local_work); - skb_queue_tail(&local->event_queue, skb); - rxrpc_queue_local(local); - } -} - -/* - * put a packet up for transport-level abort - */ -static void rxrpc_reject_packet(struct rxrpc_local *local, struct sk_buff *skb) -{ - if (rxrpc_get_local_maybe(local, rxrpc_local_get_queue)) { - rxrpc_get_skb(skb, rxrpc_skb_get_reject_work); - skb_queue_tail(&local->reject_queue, skb); - rxrpc_queue_local(local); + rxrpc_see_skb(skb, rxrpc_skb_see_version); + if (skb_copy_bits(skb, sizeof(struct rxrpc_wire_header), &v, 1) >= 0) { + if (v == 0) + rxrpc_send_version_request(local, &sp->hdr, skb); } } @@ -156,22 +134,13 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff **_skb) { struct rxrpc_connection *conn; struct sockaddr_rxrpc peer_srx; - struct rxrpc_channel *chan; - struct rxrpc_call *call = NULL; struct rxrpc_skb_priv *sp; struct rxrpc_peer *peer = NULL; - struct rxrpc_sock *rx = NULL; struct sk_buff *skb = *_skb; - unsigned int channel; - - if (skb->tstamp == 0) - skb->tstamp = ktime_get_real(); + int ret = 0; skb_pull(skb, sizeof(struct udphdr)); - /* The UDP protocol already released all skb resources; - * we are free to add our own data there. 
- */ sp = rxrpc_skb(skb); /* dig out the RxRPC connection details */ @@ -186,15 +155,13 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff **_skb) } } - if (skb->tstamp == 0) - skb->tstamp = ktime_get_real(); trace_rxrpc_rx_packet(sp); switch (sp->hdr.type) { case RXRPC_PACKET_TYPE_VERSION: if (rxrpc_to_client(sp)) return 0; - rxrpc_post_packet_to_local(local, skb); + rxrpc_input_version(local, skb); return 0; case RXRPC_PACKET_TYPE_BUSY: @@ -259,7 +226,7 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff **_skb) goto bad_message; if (WARN_ON_ONCE(rxrpc_extract_addr_from_skb(&peer_srx, skb) < 0)) - return 0; /* Unsupported address type - discard. */ + return true; /* Unsupported address type - discard. */ if (peer_srx.transport.family != local->srx.transport.family && (peer_srx.transport.family == AF_INET && @@ -267,171 +234,172 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff **_skb) pr_warn_ratelimited("AF_RXRPC: Protocol mismatch %u not %u\n", peer_srx.transport.family, local->srx.transport.family); - return 0; /* Wrong address type - discard. */ + return true; /* Wrong address type - discard. */ + } + + if (rxrpc_to_client(sp)) { + rcu_read_lock(); + conn = rxrpc_find_client_connection_rcu(local, &peer_srx, skb); + conn = rxrpc_get_connection_maybe(conn, rxrpc_conn_get_call_input); + rcu_read_unlock(); + if (!conn) { + trace_rxrpc_abort(0, "NCC", sp->hdr.cid, + sp->hdr.callNumber, sp->hdr.seq, + RXKADINCONSISTENCY, EBADMSG); + goto protocol_error; + } + + ret = rxrpc_input_packet_on_conn(conn, &peer_srx, skb); + rxrpc_put_connection(conn, rxrpc_conn_put_call_input); + return ret; } + /* We need to look up service connections by the full protocol + * parameter set. We look up the peer first as an intermediate step + * and then the connection from the peer's tree. + */ rcu_read_lock(); - if (rxrpc_to_server(sp)) { - /* Weed out packets to services we're not offering. Packets - * that would begin a call are explicitly rejected and the rest - * are just discarded. 
- */ - rx = rcu_dereference(local->service); - if (!rx || (sp->hdr.serviceId != rx->srx.srx_service && - sp->hdr.serviceId != rx->second_service) - ) { - rcu_read_unlock(); - if (sp->hdr.type == RXRPC_PACKET_TYPE_DATA && - sp->hdr.seq == 1) - goto unsupported_service; - return 0; - } + peer = rxrpc_lookup_peer_rcu(local, &peer_srx); + if (!peer) { + rcu_read_unlock(); + return rxrpc_new_incoming_call(local, NULL, NULL, &peer_srx, skb); } - conn = rxrpc_find_connection_rcu(local, &peer_srx, skb, &peer); + conn = rxrpc_find_service_conn_rcu(peer, skb); + conn = rxrpc_get_connection_maybe(conn, rxrpc_conn_get_call_input); if (conn) { - if (sp->hdr.securityIndex != conn->security_ix) - goto wrong_security; + rcu_read_unlock(); + ret = rxrpc_input_packet_on_conn(conn, &peer_srx, skb); + rxrpc_put_connection(conn, rxrpc_conn_put_call_input); + return ret; + } - if (sp->hdr.serviceId != conn->service_id) { - int old_id; + peer = rxrpc_get_peer_maybe(peer, rxrpc_peer_get_input); + rcu_read_unlock(); - if (!test_bit(RXRPC_CONN_PROBING_FOR_UPGRADE, &conn->flags)) - goto reupgrade; - old_id = cmpxchg(&conn->service_id, conn->orig_service_id, - sp->hdr.serviceId); + ret = rxrpc_new_incoming_call(local, peer, NULL, &peer_srx, skb); + rxrpc_put_peer(peer, rxrpc_peer_put_input); + if (ret < 0) + goto reject_packet; + return 0; - if (old_id != conn->orig_service_id && - old_id != sp->hdr.serviceId) - goto reupgrade; - } +bad_message: + trace_rxrpc_abort(0, "BAD", sp->hdr.cid, sp->hdr.callNumber, sp->hdr.seq, + RX_PROTOCOL_ERROR, EBADMSG); +protocol_error: + skb->priority = RX_PROTOCOL_ERROR; + skb->mark = RXRPC_SKB_MARK_REJECT_ABORT; +reject_packet: + rxrpc_reject_packet(local, skb); + return ret; +} - if (sp->hdr.callNumber == 0) { - /* Connection-level packet */ - _debug("CONN %p {%d}", conn, conn->debug_id); - conn = rxrpc_get_connection_maybe(conn, rxrpc_conn_get_conn_input); - rcu_read_unlock(); - if (conn) { - rxrpc_post_packet_to_conn(conn, skb); - rxrpc_put_connection(conn, rxrpc_conn_put_conn_input); - } - return 0; - } +/* + * Deal with a packet that's associated with an extant connection. + */ +static int rxrpc_input_packet_on_conn(struct rxrpc_connection *conn, + struct sockaddr_rxrpc *peer_srx, + struct sk_buff *skb) +{ + struct rxrpc_skb_priv *sp = rxrpc_skb(skb); + struct rxrpc_channel *chan; + struct rxrpc_call *call = NULL; + unsigned int channel; - if ((int)sp->hdr.serial - (int)conn->hi_serial > 0) - conn->hi_serial = sp->hdr.serial; + if (sp->hdr.securityIndex != conn->security_ix) + goto wrong_security; - /* Call-bound packets are routed by connection channel. */ - channel = sp->hdr.cid & RXRPC_CHANNELMASK; - chan = &conn->channels[channel]; + if (sp->hdr.serviceId != conn->service_id) { + int old_id; - /* Ignore really old calls */ - if (sp->hdr.callNumber < chan->last_call) { - rcu_read_unlock(); - return 0; - } + if (!test_bit(RXRPC_CONN_PROBING_FOR_UPGRADE, &conn->flags)) + goto reupgrade; + old_id = cmpxchg(&conn->service_id, conn->orig_service_id, + sp->hdr.serviceId); - if (sp->hdr.callNumber == chan->last_call) { - if (chan->call || - sp->hdr.type == RXRPC_PACKET_TYPE_ABORT) { - rcu_read_unlock(); - return 0; - } + if (old_id != conn->orig_service_id && + old_id != sp->hdr.serviceId) + goto reupgrade; + } - /* For the previous service call, if completed - * successfully, we discard all further packets. 
- */ - if (rxrpc_conn_is_service(conn) && - chan->last_type == RXRPC_PACKET_TYPE_ACK) { - rcu_read_unlock(); - return 0; - } + if (after(sp->hdr.serial, conn->hi_serial)) + conn->hi_serial = sp->hdr.serial; - /* But otherwise we need to retransmit the final packet - * from data cached in the connection record. - */ - if (sp->hdr.type == RXRPC_PACKET_TYPE_DATA) - trace_rxrpc_rx_data(chan->call_debug_id, - sp->hdr.seq, - sp->hdr.serial, - sp->hdr.flags); - conn = rxrpc_get_connection_maybe(conn, rxrpc_conn_get_call_input); - rcu_read_unlock(); - if (conn) { - rxrpc_post_packet_to_conn(conn, skb); - rxrpc_put_connection(conn, rxrpc_conn_put_call_input); - } + /* It's a connection-level packet if the call number is 0. */ + if (sp->hdr.callNumber == 0) + return rxrpc_input_conn_packet(conn, skb); + + /* Call-bound packets are routed by connection channel. */ + channel = sp->hdr.cid & RXRPC_CHANNELMASK; + chan = &conn->channels[channel]; + + /* Ignore really old calls */ + if (sp->hdr.callNumber < chan->last_call) + return 0; + + if (sp->hdr.callNumber == chan->last_call) { + if (chan->call || + sp->hdr.type == RXRPC_PACKET_TYPE_ABORT) return 0; - } - call = rcu_dereference(chan->call); + /* For the previous service call, if completed successfully, we + * discard all further packets. + */ + if (rxrpc_conn_is_service(conn) && + chan->last_type == RXRPC_PACKET_TYPE_ACK) + return 0; - if (sp->hdr.callNumber > chan->call_id) { - if (rxrpc_to_client(sp)) { - rcu_read_unlock(); - goto reject_packet; - } - if (call) { - rxrpc_input_implicit_end_call(conn, call); - chan->call = NULL; - call = NULL; - } - } + /* But otherwise we need to retransmit the final packet from + * data cached in the connection record. + */ + if (sp->hdr.type == RXRPC_PACKET_TYPE_DATA) + trace_rxrpc_rx_data(chan->call_debug_id, + sp->hdr.seq, + sp->hdr.serial, + sp->hdr.flags); + rxrpc_input_conn_packet(conn, skb); + return 0; + } - if (call && !rxrpc_try_get_call(call, rxrpc_call_get_input)) - call = NULL; + rcu_read_lock(); + call = rxrpc_try_get_call(rcu_dereference(chan->call), + rxrpc_call_get_input); + rcu_read_unlock(); + + if (sp->hdr.callNumber > chan->call_id) { + if (rxrpc_to_client(sp)) { + rxrpc_put_call(call, rxrpc_call_put_input); + goto reject_packet; + } if (call) { - if (sp->hdr.serviceId != call->dest_srx.srx_service) - call->dest_srx.srx_service = sp->hdr.serviceId; - if ((int)sp->hdr.serial - (int)call->rx_serial > 0) - call->rx_serial = sp->hdr.serial; - if (!test_bit(RXRPC_CALL_RX_HEARD, &call->flags)) - set_bit(RXRPC_CALL_RX_HEARD, &call->flags); + rxrpc_implicit_end_call(call, skb); + rxrpc_put_call(call, rxrpc_call_put_input); + call = NULL; } } if (!call) { - if (rxrpc_to_client(sp) || - sp->hdr.type != RXRPC_PACKET_TYPE_DATA) { - rcu_read_unlock(); + if (rxrpc_to_client(sp)) goto bad_message; - } - if (sp->hdr.seq != 1) { - rcu_read_unlock(); + if (rxrpc_new_incoming_call(conn->local, conn->peer, conn, + peer_srx, skb)) return 0; - } - call = rxrpc_new_incoming_call(local, rx, &peer_srx, skb); - if (!call) { - rcu_read_unlock(); - goto reject_packet; - } + goto reject_packet; } - rcu_read_unlock(); - - /* Process a call packet. 
*/ rxrpc_input_call_event(call, skb); rxrpc_put_call(call, rxrpc_call_put_input); - trace_rxrpc_rx_done(0, 0); return 0; wrong_security: - rcu_read_unlock(); trace_rxrpc_abort(0, "SEC", sp->hdr.cid, sp->hdr.callNumber, sp->hdr.seq, RXKADINCONSISTENCY, EBADMSG); skb->priority = RXKADINCONSISTENCY; goto post_abort; -unsupported_service: - trace_rxrpc_abort(0, "INV", sp->hdr.cid, sp->hdr.callNumber, sp->hdr.seq, - RX_INVALID_OPERATION, EOPNOTSUPP); - skb->priority = RX_INVALID_OPERATION; - goto post_abort; - reupgrade: - rcu_read_unlock(); trace_rxrpc_abort(0, "UPG", sp->hdr.cid, sp->hdr.callNumber, sp->hdr.seq, RX_PROTOCOL_ERROR, EBADMSG); goto protocol_error; @@ -444,7 +412,7 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff **_skb) post_abort: skb->mark = RXRPC_SKB_MARK_REJECT_ABORT; reject_packet: - rxrpc_reject_packet(local, skb); + rxrpc_reject_packet(conn->local, skb); return 0; } @@ -479,6 +447,11 @@ int rxrpc_io_thread(void *data) continue; } + if (!list_empty(&local->ack_tx_queue)) { + rxrpc_transmit_ack_packets(local); + continue; + } + /* Process received packets and errors. */ if ((skb = __skb_dequeue(&rx_queue))) { switch (skb->mark) { diff --git a/net/rxrpc/local_event.c b/net/rxrpc/local_event.c index c344383a20b2..5e69ea6b233d 100644 --- a/net/rxrpc/local_event.c +++ b/net/rxrpc/local_event.c @@ -21,9 +21,9 @@ static const char rxrpc_version_string[65] = "linux-" UTS_RELEASE " AF_RXRPC"; /* * Reply to a version request */ -static void rxrpc_send_version_request(struct rxrpc_local *local, - struct rxrpc_host_header *hdr, - struct sk_buff *skb) +void rxrpc_send_version_request(struct rxrpc_local *local, + struct rxrpc_host_header *hdr, + struct sk_buff *skb) { struct rxrpc_wire_header whdr; struct rxrpc_skb_priv *sp = rxrpc_skb(skb); @@ -73,40 +73,3 @@ static void rxrpc_send_version_request(struct rxrpc_local *local, _leave(""); } - -/* - * Process event packets targeted at a local endpoint. 
- */ -void rxrpc_process_local_events(struct rxrpc_local *local) -{ - struct sk_buff *skb; - char v; - - _enter(""); - - skb = skb_dequeue(&local->event_queue); - if (skb) { - struct rxrpc_skb_priv *sp = rxrpc_skb(skb); - - rxrpc_see_skb(skb, rxrpc_skb_see_local_work); - _debug("{%d},{%u}", local->debug_id, sp->hdr.type); - - switch (sp->hdr.type) { - case RXRPC_PACKET_TYPE_VERSION: - if (skb_copy_bits(skb, sizeof(struct rxrpc_wire_header), - &v, 1) < 0) - return; - if (v == 0) - rxrpc_send_version_request(local, &sp->hdr, skb); - break; - - default: - /* Just ignore anything we don't understand */ - break; - } - - rxrpc_free_skb(skb, rxrpc_skb_put_input); - } - - _leave(""); -} diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c index 03f491cc23ef..c73a5a1bc088 100644 --- a/net/rxrpc/local_object.c +++ b/net/rxrpc/local_object.c @@ -20,7 +20,6 @@ #include #include "ar-internal.h" -static void rxrpc_local_processor(struct work_struct *); static void rxrpc_local_rcu(struct rcu_head *); /* @@ -97,12 +96,9 @@ static struct rxrpc_local *rxrpc_alloc_local(struct rxrpc_net *rxnet, atomic_set(&local->active_users, 1); local->rxnet = rxnet; INIT_HLIST_NODE(&local->link); - INIT_WORK(&local->processor, rxrpc_local_processor); INIT_LIST_HEAD(&local->ack_tx_queue); spin_lock_init(&local->ack_tx_lock); init_rwsem(&local->defrag_sem); - skb_queue_head_init(&local->reject_queue); - skb_queue_head_init(&local->event_queue); skb_queue_head_init(&local->rx_queue); INIT_LIST_HEAD(&local->call_attend_q); local->client_bundles = RB_ROOT; @@ -318,21 +314,6 @@ struct rxrpc_local *rxrpc_get_local_maybe(struct rxrpc_local *local, return NULL; } -/* - * Queue a local endpoint and pass the caller's reference to the work item. - */ -void rxrpc_queue_local(struct rxrpc_local *local) -{ - unsigned int debug_id = local->debug_id; - int r = refcount_read(&local->ref); - int u = atomic_read(&local->active_users); - - if (rxrpc_queue_work(&local->processor)) - trace_rxrpc_local(debug_id, rxrpc_local_queued, r, u); - else - rxrpc_put_local(local, rxrpc_local_put_already_queued); -} - /* * Drop a ref on a local endpoint. */ @@ -374,7 +355,7 @@ struct rxrpc_local *rxrpc_use_local(struct rxrpc_local *local, /* * Cease using a local endpoint. Once the number of active users reaches 0, we - * start the closure of the transport in the work processor. + * start the closure of the transport in the I/O thread.. */ void rxrpc_unuse_local(struct rxrpc_local *local, enum rxrpc_local_trace why) { @@ -416,52 +397,9 @@ void rxrpc_destroy_local(struct rxrpc_local *local) /* At this point, there should be no more packets coming in to the * local endpoint. */ - rxrpc_purge_queue(&local->reject_queue); - rxrpc_purge_queue(&local->event_queue); rxrpc_purge_queue(&local->rx_queue); } -/* - * Process events on an endpoint. The work item carries a ref which - * we must release. 
- */ -static void rxrpc_local_processor(struct work_struct *work) -{ - struct rxrpc_local *local = - container_of(work, struct rxrpc_local, processor); - bool again; - - if (local->dead) - return; - - rxrpc_see_local(local, rxrpc_local_processing); - - do { - again = false; - if (!__rxrpc_use_local(local, rxrpc_local_use_work)) - break; - - if (!list_empty(&local->ack_tx_queue)) { - rxrpc_transmit_ack_packets(local); - again = true; - } - - if (!skb_queue_empty(&local->reject_queue)) { - rxrpc_reject_packets(local); - again = true; - } - - if (!skb_queue_empty(&local->event_queue)) { - rxrpc_process_local_events(local); - again = true; - } - - __rxrpc_unuse_local(local, rxrpc_local_unuse_work); - } while (again); - - rxrpc_put_local(local, rxrpc_local_put_queue); -} - /* * Destroy a local endpoint after the RCU grace period expires. */ @@ -469,13 +407,8 @@ static void rxrpc_local_rcu(struct rcu_head *rcu) { struct rxrpc_local *local = container_of(rcu, struct rxrpc_local, rcu); - _enter("%d", local->debug_id); - - ASSERT(!work_pending(&local->processor)); - rxrpc_see_local(local, rxrpc_local_free); kfree(local); - _leave(""); } /* diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index 71963b4523be..2ea1fa1b8a6f 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -229,11 +229,6 @@ static int rxrpc_send_ack_packet(struct rxrpc_local *local, struct rxrpc_txbuf * if (txb->ack.reason == RXRPC_ACK_PING) txb->wire.flags |= RXRPC_REQUEST_ACK; - if (txb->ack.reason == RXRPC_ACK_DELAY) - clear_bit(RXRPC_CALL_DELAY_ACK_PENDING, &call->flags); - if (txb->ack.reason == RXRPC_ACK_IDLE) - clear_bit(RXRPC_CALL_IDLE_ACK_PENDING, &call->flags); - n = rxrpc_fill_out_ack(conn, call, txb); if (n == 0) return 0; @@ -247,8 +242,6 @@ static int rxrpc_send_ack_packet(struct rxrpc_local *local, struct rxrpc_txbuf * trace_rxrpc_tx_ack(call->debug_id, serial, ntohl(txb->ack.firstPacket), ntohl(txb->ack.serial), txb->ack.reason, txb->ack.nAcks); - if (txb->ack_why == rxrpc_propose_ack_ping_for_lost_ack) - call->acks_lost_ping = serial; if (txb->ack.reason == RXRPC_ACK_PING) rtt_slot = rxrpc_begin_rtt_probe(call, serial, rxrpc_rtt_tx_ping); @@ -588,21 +581,20 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) } /* - * reject packets through the local endpoint + * Reject a packet through the local endpoint. 
*/ -void rxrpc_reject_packets(struct rxrpc_local *local) +void rxrpc_reject_packet(struct rxrpc_local *local, struct sk_buff *skb) { - struct sockaddr_rxrpc srx; - struct rxrpc_skb_priv *sp; struct rxrpc_wire_header whdr; - struct sk_buff *skb; + struct sockaddr_rxrpc srx; + struct rxrpc_skb_priv *sp = rxrpc_skb(skb); struct msghdr msg; struct kvec iov[2]; size_t size; __be32 code; int ret, ioc; - _enter("%d", local->debug_id); + rxrpc_see_skb(skb, rxrpc_skb_see_reject); iov[0].iov_base = &whdr; iov[0].iov_len = sizeof(whdr); @@ -616,52 +608,42 @@ void rxrpc_reject_packets(struct rxrpc_local *local) memset(&whdr, 0, sizeof(whdr)); - while ((skb = skb_dequeue(&local->reject_queue))) { - rxrpc_see_skb(skb, rxrpc_skb_see_reject); - sp = rxrpc_skb(skb); + switch (skb->mark) { + case RXRPC_SKB_MARK_REJECT_BUSY: + whdr.type = RXRPC_PACKET_TYPE_BUSY; + size = sizeof(whdr); + ioc = 1; + break; + case RXRPC_SKB_MARK_REJECT_ABORT: + whdr.type = RXRPC_PACKET_TYPE_ABORT; + code = htonl(skb->priority); + size = sizeof(whdr) + sizeof(code); + ioc = 2; + break; + default: + return; + } - switch (skb->mark) { - case RXRPC_SKB_MARK_REJECT_BUSY: - whdr.type = RXRPC_PACKET_TYPE_BUSY; - size = sizeof(whdr); - ioc = 1; - break; - case RXRPC_SKB_MARK_REJECT_ABORT: - whdr.type = RXRPC_PACKET_TYPE_ABORT; - code = htonl(skb->priority); - size = sizeof(whdr) + sizeof(code); - ioc = 2; - break; - default: - rxrpc_free_skb(skb, rxrpc_skb_put_input); - continue; - } + if (rxrpc_extract_addr_from_skb(&srx, skb) == 0) { + msg.msg_namelen = srx.transport_len; - if (rxrpc_extract_addr_from_skb(&srx, skb) == 0) { - msg.msg_namelen = srx.transport_len; - - whdr.epoch = htonl(sp->hdr.epoch); - whdr.cid = htonl(sp->hdr.cid); - whdr.callNumber = htonl(sp->hdr.callNumber); - whdr.serviceId = htons(sp->hdr.serviceId); - whdr.flags = sp->hdr.flags; - whdr.flags ^= RXRPC_CLIENT_INITIATED; - whdr.flags &= RXRPC_CLIENT_INITIATED; - - iov_iter_kvec(&msg.msg_iter, WRITE, iov, ioc, size); - ret = do_udp_sendmsg(local->socket, &msg, size); - if (ret < 0) - trace_rxrpc_tx_fail(local->debug_id, 0, ret, - rxrpc_tx_point_reject); - else - trace_rxrpc_tx_packet(local->debug_id, &whdr, - rxrpc_tx_point_reject); - } + whdr.epoch = htonl(sp->hdr.epoch); + whdr.cid = htonl(sp->hdr.cid); + whdr.callNumber = htonl(sp->hdr.callNumber); + whdr.serviceId = htons(sp->hdr.serviceId); + whdr.flags = sp->hdr.flags; + whdr.flags ^= RXRPC_CLIENT_INITIATED; + whdr.flags &= RXRPC_CLIENT_INITIATED; - rxrpc_free_skb(skb, rxrpc_skb_put_input); + iov_iter_kvec(&msg.msg_iter, WRITE, iov, ioc, size); + ret = do_udp_sendmsg(local->socket, &msg, size); + if (ret < 0) + trace_rxrpc_tx_fail(local->debug_id, 0, ret, + rxrpc_tx_point_reject); + else + trace_rxrpc_tx_packet(local->debug_id, &whdr, + rxrpc_tx_point_reject); } - - _leave(""); } /* diff --git a/net/rxrpc/peer_event.c b/net/rxrpc/peer_event.c index 97d017ca3dc4..fb8096e93d2c 100644 --- a/net/rxrpc/peer_event.c +++ b/net/rxrpc/peer_event.c @@ -18,9 +18,9 @@ #include #include "ar-internal.h" -static void rxrpc_store_error(struct rxrpc_peer *, struct sock_exterr_skb *); -static void rxrpc_distribute_error(struct rxrpc_peer *, int, - enum rxrpc_call_completion); +static void rxrpc_store_error(struct rxrpc_peer *, struct sk_buff *); +static void rxrpc_distribute_error(struct rxrpc_peer *, struct sk_buff *, + enum rxrpc_call_completion, int); /* * Find the peer associated with a local error. 
@@ -161,7 +161,7 @@ void rxrpc_input_error(struct rxrpc_local *local, struct sk_buff *skb) goto out; } - rxrpc_store_error(peer, serr); + rxrpc_store_error(peer, skb); out: rxrpc_put_peer(peer, rxrpc_peer_put_input_error); } @@ -169,19 +169,15 @@ void rxrpc_input_error(struct rxrpc_local *local, struct sk_buff *skb) /* * Map an error report to error codes on the peer record. */ -static void rxrpc_store_error(struct rxrpc_peer *peer, - struct sock_exterr_skb *serr) +static void rxrpc_store_error(struct rxrpc_peer *peer, struct sk_buff *skb) { enum rxrpc_call_completion compl = RXRPC_CALL_NETWORK_ERROR; - struct sock_extended_err *ee; - int err; + struct sock_exterr_skb *serr = SKB_EXT_ERR(skb); + struct sock_extended_err *ee = &serr->ee; + int err = ee->ee_errno; _enter(""); - ee = &serr->ee; - - err = ee->ee_errno; - switch (ee->ee_origin) { case SO_EE_ORIGIN_NONE: case SO_EE_ORIGIN_LOCAL: @@ -197,14 +193,14 @@ static void rxrpc_store_error(struct rxrpc_peer *peer, break; } - rxrpc_distribute_error(peer, err, compl); + rxrpc_distribute_error(peer, skb, compl, err); } /* * Distribute an error that occurred on a peer. */ -static void rxrpc_distribute_error(struct rxrpc_peer *peer, int error, - enum rxrpc_call_completion compl) +static void rxrpc_distribute_error(struct rxrpc_peer *peer, struct sk_buff *skb, + enum rxrpc_call_completion compl, int err) { struct rxrpc_call *call; HLIST_HEAD(error_targets); @@ -219,7 +215,8 @@ static void rxrpc_distribute_error(struct rxrpc_peer *peer, int error, spin_unlock(&peer->lock); rxrpc_see_call(call, rxrpc_call_see_distribute_error); - rxrpc_set_call_completion(call, compl, 0, -error); + rxrpc_set_call_completion(call, compl, 0, -err); + rxrpc_input_call_event(call, skb); spin_lock(&peer->lock); } diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c index 5df7f468abed..77d03b9e4c4c 100644 --- a/net/rxrpc/recvmsg.c +++ b/net/rxrpc/recvmsg.c @@ -253,11 +253,8 @@ static void rxrpc_rotate_rx_window(struct rxrpc_call *call) acked = atomic_add_return(call->rx_consumed - old_consumed, &call->ackr_nr_consumed); if (acked > 2 && - !test_and_set_bit(RXRPC_CALL_IDLE_ACK_PENDING, &call->flags)) { - rxrpc_send_ACK(call, RXRPC_ACK_IDLE, serial, - rxrpc_propose_ack_rotate_rx); - rxrpc_transmit_ack_packets(call->peer->local); - } + !test_and_set_bit(RXRPC_CALL_RX_IS_IDLE, &call->flags)) + rxrpc_poke_call(call, rxrpc_call_poke_idle); } /* @@ -377,7 +374,7 @@ static int rxrpc_recvmsg_data(struct socket *sock, struct rxrpc_call *call, trace_rxrpc_recvdata(call, rxrpc_recvmsg_data_return, seq, rx_pkt_offset, rx_pkt_len, ret); if (ret == -EAGAIN) - set_bit(RXRPC_CALL_RX_UNDERRUN, &call->flags); + set_bit(RXRPC_CALL_RX_IS_IDLE, &call->flags); return ret; } diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c index 11af37275d5b..58e0a36f6aa9 100644 --- a/net/rxrpc/sendmsg.c +++ b/net/rxrpc/sendmsg.c @@ -170,7 +170,7 @@ static void rxrpc_queue_packet(struct rxrpc_sock *rx, struct rxrpc_call *call, { unsigned long now; rxrpc_seq_t seq = txb->seq; - bool last = test_bit(RXRPC_TXBUF_LAST, &txb->flags); + bool last = test_bit(RXRPC_TXBUF_LAST, &txb->flags), poke; rxrpc_inc_stat(call->rxnet, stat_tx_data); @@ -188,6 +188,7 @@ static void rxrpc_queue_packet(struct rxrpc_sock *rx, struct rxrpc_call *call, /* Add the packet to the call's output buffer */ spin_lock(&call->tx_lock); + poke = list_empty(&call->tx_sendmsg); list_add_tail(&txb->call_link, &call->tx_sendmsg); call->tx_prepared = seq; spin_unlock(&call->tx_lock); @@ -220,11 +221,8 @@ static void 
rxrpc_queue_packet(struct rxrpc_sock *rx, struct rxrpc_call *call, write_unlock_bh(&call->state_lock); } - - /* Stick the packet on the crypto queue or the transmission queue as - * appropriate. - */ - rxrpc_queue_call(call, rxrpc_call_queue_tx_data); + if (poke) + rxrpc_poke_call(call, rxrpc_call_poke_start); } /*
From patchwork Wed Nov 30 16:58:46 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 27907
3798903 Subject: [PATCH net-next 31/35] rxrpc: Remove the _bh annotation from all the spinlocks From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Wed, 30 Nov 2022 16:58:46 +0000 Message-ID: <166982752629.621383.8913353266465133998.stgit@warthog.procyon.org.uk> In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> References: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750941487268995980?= X-GMAIL-MSGID: =?utf-8?q?1750941487268995980?= None of the spinlocks in rxrpc need a _bh annotation now as the RCU callback routines no longer take spinlocks and the bulk of the packet wrangling code is now run in the I/O thread, not softirq context. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- net/rxrpc/af_rxrpc.c | 4 ++-- net/rxrpc/call_accept.c | 8 ++++---- net/rxrpc/call_event.c | 4 ++-- net/rxrpc/call_object.c | 20 ++++++++++---------- net/rxrpc/conn_client.c | 4 ++-- net/rxrpc/conn_event.c | 16 ++++++++-------- net/rxrpc/conn_service.c | 10 +++++----- net/rxrpc/input.c | 4 ++-- net/rxrpc/output.c | 8 ++++---- net/rxrpc/peer_event.c | 16 ++++++++-------- net/rxrpc/peer_object.c | 8 ++++---- net/rxrpc/recvmsg.c | 36 ++++++++++++++++++------------------ net/rxrpc/sendmsg.c | 12 ++++++------ 13 files changed, 75 insertions(+), 75 deletions(-) diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c index 8ad4d85acb0b..7ea576f6ba4b 100644 --- a/net/rxrpc/af_rxrpc.c +++ b/net/rxrpc/af_rxrpc.c @@ -359,9 +359,9 @@ void rxrpc_kernel_end_call(struct socket *sock, struct rxrpc_call *call) /* Make sure we're not going to call back into a kernel service */ if (call->notify_rx) { - spin_lock_bh(&call->notify_lock); + spin_lock(&call->notify_lock); call->notify_rx = rxrpc_dummy_notify_rx; - spin_unlock_bh(&call->notify_lock); + spin_unlock(&call->notify_lock); } mutex_unlock(&call->user_mutex); diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c index 86a4187fb2fb..603e7520384a 100644 --- a/net/rxrpc/call_accept.c +++ b/net/rxrpc/call_accept.c @@ -138,9 +138,9 @@ static int rxrpc_service_prealloc_one(struct rxrpc_sock *rx, write_unlock(&rx->call_lock); rxnet = call->rxnet; - spin_lock_bh(&rxnet->call_lock); + spin_lock(&rxnet->call_lock); list_add_tail_rcu(&call->link, &rxnet->calls); - spin_unlock_bh(&rxnet->call_lock); + spin_unlock(&rxnet->call_lock); b->call_backlog[call_head] = call; smp_store_release(&b->call_backlog_head, (call_head + 1) & (size - 1)); @@ -188,8 +188,8 @@ void rxrpc_discard_prealloc(struct rxrpc_sock *rx) /* Make sure that there aren't any incoming calls in progress before we * clear the preallocation buffers. 
*/ - spin_lock_bh(&rx->incoming_lock); - spin_unlock_bh(&rx->incoming_lock); + spin_lock(&rx->incoming_lock); + spin_unlock(&rx->incoming_lock); head = b->peer_backlog_head; tail = b->peer_backlog_tail; diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c index 9db62fa55c62..18591f9ecc6a 100644 --- a/net/rxrpc/call_event.c +++ b/net/rxrpc/call_event.c @@ -101,9 +101,9 @@ void rxrpc_send_ACK(struct rxrpc_call *call, u8 ack_reason, return; } - spin_lock_bh(&local->ack_tx_lock); + spin_lock(&local->ack_tx_lock); list_add_tail(&txb->tx_link, &local->ack_tx_queue); - spin_unlock_bh(&local->ack_tx_lock); + spin_unlock(&local->ack_tx_lock); trace_rxrpc_send_ack(call, why, ack_reason, serial); rxrpc_wake_up_io_thread(local); diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c index d441a715d988..be5eb8cdf549 100644 --- a/net/rxrpc/call_object.c +++ b/net/rxrpc/call_object.c @@ -354,9 +354,9 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx, write_unlock(&rx->call_lock); rxnet = call->rxnet; - spin_lock_bh(&rxnet->call_lock); + spin_lock(&rxnet->call_lock); list_add_tail_rcu(&call->link, &rxnet->calls); - spin_unlock_bh(&rxnet->call_lock); + spin_unlock(&rxnet->call_lock); /* From this point on, the call is protected by its own lock. */ release_sock(&rx->sk); @@ -537,7 +537,7 @@ void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call) del_timer_sync(&call->timer); /* Make sure we don't get any more notifications */ - write_lock_bh(&rx->recvmsg_lock); + write_lock(&rx->recvmsg_lock); if (!list_empty(&call->recvmsg_link)) { _debug("unlinking once-pending call %p { e=%lx f=%lx }", @@ -550,7 +550,7 @@ void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call) call->recvmsg_link.next = NULL; call->recvmsg_link.prev = NULL; - write_unlock_bh(&rx->recvmsg_lock); + write_unlock(&rx->recvmsg_lock); if (put) rxrpc_put_call(call, rxrpc_call_put_unnotify); @@ -622,9 +622,9 @@ void rxrpc_put_call(struct rxrpc_call *call, enum rxrpc_call_trace why) ASSERTCMP(call->state, ==, RXRPC_CALL_COMPLETE); if (!list_empty(&call->link)) { - spin_lock_bh(&rxnet->call_lock); + spin_lock(&rxnet->call_lock); list_del_init(&call->link); - spin_unlock_bh(&rxnet->call_lock); + spin_unlock(&rxnet->call_lock); } rxrpc_cleanup_call(call); @@ -706,7 +706,7 @@ void rxrpc_destroy_all_calls(struct rxrpc_net *rxnet) _enter(""); if (!list_empty(&rxnet->calls)) { - spin_lock_bh(&rxnet->call_lock); + spin_lock(&rxnet->call_lock); while (!list_empty(&rxnet->calls)) { call = list_entry(rxnet->calls.next, @@ -721,12 +721,12 @@ void rxrpc_destroy_all_calls(struct rxrpc_net *rxnet) rxrpc_call_states[call->state], call->flags, call->events); - spin_unlock_bh(&rxnet->call_lock); + spin_unlock(&rxnet->call_lock); cond_resched(); - spin_lock_bh(&rxnet->call_lock); + spin_lock(&rxnet->call_lock); } - spin_unlock_bh(&rxnet->call_lock); + spin_unlock(&rxnet->call_lock); } atomic_dec(&rxnet->nr_calls); diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c index 3c7b1bdec0db..a08e33c9e54b 100644 --- a/net/rxrpc/conn_client.c +++ b/net/rxrpc/conn_client.c @@ -557,9 +557,9 @@ static void rxrpc_activate_one_channel(struct rxrpc_connection *conn, trace_rxrpc_connect_call(call); - write_lock_bh(&call->state_lock); + write_lock(&call->state_lock); call->state = RXRPC_CALL_CLIENT_SEND_REQUEST; - write_unlock_bh(&call->state_lock); + write_unlock(&call->state_lock); /* Paired with the read barrier in rxrpc_connect_call(). 
This orders * cid and epoch in the connection wrt to call_id without the need to diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c index 643a56322224..480364bcbf85 100644 --- a/net/rxrpc/conn_event.c +++ b/net/rxrpc/conn_event.c @@ -198,9 +198,9 @@ static int rxrpc_abort_connection(struct rxrpc_connection *conn, _enter("%d,,%u,%u", conn->debug_id, error, abort_code); /* generate a connection-level abort */ - spin_lock_bh(&conn->state_lock); + spin_lock(&conn->state_lock); if (conn->state >= RXRPC_CONN_REMOTELY_ABORTED) { - spin_unlock_bh(&conn->state_lock); + spin_unlock(&conn->state_lock); _leave(" = 0 [already dead]"); return 0; } @@ -209,7 +209,7 @@ static int rxrpc_abort_connection(struct rxrpc_connection *conn, conn->abort_code = abort_code; conn->state = RXRPC_CONN_LOCALLY_ABORTED; set_bit(RXRPC_CONN_DONT_REUSE, &conn->flags); - spin_unlock_bh(&conn->state_lock); + spin_unlock(&conn->state_lock); msg.msg_name = &conn->peer->srx.transport; msg.msg_namelen = conn->peer->srx.transport_len; @@ -265,12 +265,12 @@ static void rxrpc_call_is_secure(struct rxrpc_call *call) { _enter("%p", call); if (call) { - write_lock_bh(&call->state_lock); + write_lock(&call->state_lock); if (call->state == RXRPC_CALL_SERVER_SECURING) { call->state = RXRPC_CALL_SERVER_RECV_REQUEST; rxrpc_notify_socket(call); } - write_unlock_bh(&call->state_lock); + write_unlock(&call->state_lock); } } @@ -325,18 +325,18 @@ static int rxrpc_process_event(struct rxrpc_connection *conn, return ret; spin_lock(&conn->bundle->channel_lock); - spin_lock_bh(&conn->state_lock); + spin_lock(&conn->state_lock); if (conn->state == RXRPC_CONN_SERVICE_CHALLENGING) { conn->state = RXRPC_CONN_SERVICE; - spin_unlock_bh(&conn->state_lock); + spin_unlock(&conn->state_lock); for (loop = 0; loop < RXRPC_MAXCALLS; loop++) rxrpc_call_is_secure( rcu_dereference_protected( conn->channels[loop].call, lockdep_is_held(&conn->bundle->channel_lock))); } else { - spin_unlock_bh(&conn->state_lock); + spin_unlock(&conn->state_lock); } spin_unlock(&conn->bundle->channel_lock); diff --git a/net/rxrpc/conn_service.c b/net/rxrpc/conn_service.c index b5ae7c753fc3..2a55a88b2a5b 100644 --- a/net/rxrpc/conn_service.c +++ b/net/rxrpc/conn_service.c @@ -73,7 +73,7 @@ static void rxrpc_publish_service_conn(struct rxrpc_peer *peer, struct rxrpc_conn_proto k = conn->proto; struct rb_node **pp, *parent; - write_seqlock_bh(&peer->service_conn_lock); + write_seqlock(&peer->service_conn_lock); pp = &peer->service_conns.rb_node; parent = NULL; @@ -94,14 +94,14 @@ static void rxrpc_publish_service_conn(struct rxrpc_peer *peer, rb_insert_color(&conn->service_node, &peer->service_conns); conn_published: set_bit(RXRPC_CONN_IN_SERVICE_CONNS, &conn->flags); - write_sequnlock_bh(&peer->service_conn_lock); + write_sequnlock(&peer->service_conn_lock); _leave(" = %d [new]", conn->debug_id); return; found_extant_conn: if (refcount_read(&cursor->ref) == 0) goto replace_old_connection; - write_sequnlock_bh(&peer->service_conn_lock); + write_sequnlock(&peer->service_conn_lock); /* We should not be able to get here. rxrpc_incoming_connection() is * called in a non-reentrant context, so there can't be a race to * insert a new connection. 
@@ -195,8 +195,8 @@ void rxrpc_unpublish_service_conn(struct rxrpc_connection *conn) { struct rxrpc_peer *peer = conn->peer; - write_seqlock_bh(&peer->service_conn_lock); + write_seqlock(&peer->service_conn_lock); if (test_and_clear_bit(RXRPC_CONN_IN_SERVICE_CONNS, &conn->flags)) rb_erase(&conn->service_node, &peer->service_conns); - write_sequnlock_bh(&peer->service_conn_lock); + write_sequnlock(&peer->service_conn_lock); } diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index 7ae7046f0b03..dd2ac5d55e1c 100644 --- a/net/rxrpc/input.c +++ b/net/rxrpc/input.c @@ -669,10 +669,10 @@ static void rxrpc_input_ackinfo(struct rxrpc_call *call, struct sk_buff *skb, peer = call->peer; if (mtu < peer->maxdata) { - spin_lock_bh(&peer->lock); + spin_lock(&peer->lock); peer->maxdata = mtu; peer->mtu = mtu + peer->hdrsize; - spin_unlock_bh(&peer->lock); + spin_unlock(&peer->lock); } if (wake) diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index 2ea1fa1b8a6f..e5d715b855fc 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -286,9 +286,9 @@ void rxrpc_transmit_ack_packets(struct rxrpc_local *local) if (list_empty(&local->ack_tx_queue)) return; - spin_lock_bh(&local->ack_tx_lock); + spin_lock(&local->ack_tx_lock); list_splice_tail_init(&local->ack_tx_queue, &queue); - spin_unlock_bh(&local->ack_tx_lock); + spin_unlock(&local->ack_tx_lock); while (!list_empty(&queue)) { struct rxrpc_txbuf *txb = @@ -296,9 +296,9 @@ void rxrpc_transmit_ack_packets(struct rxrpc_local *local) ret = rxrpc_send_ack_packet(local, txb); if (ret < 0 && ret != -ECONNRESET) { - spin_lock_bh(&local->ack_tx_lock); + spin_lock(&local->ack_tx_lock); list_splice_init(&queue, &local->ack_tx_queue); - spin_unlock_bh(&local->ack_tx_lock); + spin_unlock(&local->ack_tx_lock); break; } diff --git a/net/rxrpc/peer_event.c b/net/rxrpc/peer_event.c index fb8096e93d2c..6685bf917aa6 100644 --- a/net/rxrpc/peer_event.c +++ b/net/rxrpc/peer_event.c @@ -121,10 +121,10 @@ static void rxrpc_adjust_mtu(struct rxrpc_peer *peer, unsigned int mtu) } if (mtu < peer->mtu) { - spin_lock_bh(&peer->lock); + spin_lock(&peer->lock); peer->mtu = mtu; peer->maxdata = peer->mtu - peer->hdrsize; - spin_unlock_bh(&peer->lock); + spin_unlock(&peer->lock); } } @@ -237,7 +237,7 @@ static void rxrpc_peer_keepalive_dispatch(struct rxrpc_net *rxnet, time64_t keepalive_at; int slot; - spin_lock_bh(&rxnet->peer_hash_lock); + spin_lock(&rxnet->peer_hash_lock); while (!list_empty(collector)) { peer = list_entry(collector->next, @@ -248,7 +248,7 @@ static void rxrpc_peer_keepalive_dispatch(struct rxrpc_net *rxnet, continue; if (__rxrpc_use_local(peer->local, rxrpc_local_use_peer_keepalive)) { - spin_unlock_bh(&rxnet->peer_hash_lock); + spin_unlock(&rxnet->peer_hash_lock); keepalive_at = peer->last_tx_at + RXRPC_KEEPALIVE_TIME; slot = keepalive_at - base; @@ -267,7 +267,7 @@ static void rxrpc_peer_keepalive_dispatch(struct rxrpc_net *rxnet, */ slot += cursor; slot &= mask; - spin_lock_bh(&rxnet->peer_hash_lock); + spin_lock(&rxnet->peer_hash_lock); list_add_tail(&peer->keepalive_link, &rxnet->peer_keepalive[slot & mask]); rxrpc_unuse_local(peer->local, rxrpc_local_unuse_peer_keepalive); @@ -275,7 +275,7 @@ static void rxrpc_peer_keepalive_dispatch(struct rxrpc_net *rxnet, rxrpc_put_peer_locked(peer, rxrpc_peer_put_keepalive); } - spin_unlock_bh(&rxnet->peer_hash_lock); + spin_unlock(&rxnet->peer_hash_lock); } /* @@ -305,7 +305,7 @@ void rxrpc_peer_keepalive_worker(struct work_struct *work) * second; the bucket at cursor + 1 goes at now + 1s and so * on... 
*/ - spin_lock_bh(&rxnet->peer_hash_lock); + spin_lock(&rxnet->peer_hash_lock); list_splice_init(&rxnet->peer_keepalive_new, &collector); stop = cursor + ARRAY_SIZE(rxnet->peer_keepalive); @@ -317,7 +317,7 @@ void rxrpc_peer_keepalive_worker(struct work_struct *work) } base = now; - spin_unlock_bh(&rxnet->peer_hash_lock); + spin_unlock(&rxnet->peer_hash_lock); rxnet->peer_keepalive_base = base; rxnet->peer_keepalive_cursor = cursor; diff --git a/net/rxrpc/peer_object.c b/net/rxrpc/peer_object.c index 9e682a60a800..608946dcc505 100644 --- a/net/rxrpc/peer_object.c +++ b/net/rxrpc/peer_object.c @@ -349,7 +349,7 @@ struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_sock *rx, return NULL; } - spin_lock_bh(&rxnet->peer_hash_lock); + spin_lock(&rxnet->peer_hash_lock); /* Need to check that we aren't racing with someone else */ peer = __rxrpc_lookup_peer_rcu(local, srx, hash_key); @@ -362,7 +362,7 @@ struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_sock *rx, &rxnet->peer_keepalive_new); } - spin_unlock_bh(&rxnet->peer_hash_lock); + spin_unlock(&rxnet->peer_hash_lock); if (peer) rxrpc_free_peer(candidate); @@ -412,10 +412,10 @@ static void __rxrpc_put_peer(struct rxrpc_peer *peer) ASSERT(hlist_empty(&peer->error_targets)); - spin_lock_bh(&rxnet->peer_hash_lock); + spin_lock(&rxnet->peer_hash_lock); hash_del_rcu(&peer->hash_link); list_del_init(&peer->keepalive_link); - spin_unlock_bh(&rxnet->peer_hash_lock); + spin_unlock(&rxnet->peer_hash_lock); rxrpc_free_peer(peer); } diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c index 77d03b9e4c4c..3a8576e9daf3 100644 --- a/net/rxrpc/recvmsg.c +++ b/net/rxrpc/recvmsg.c @@ -36,16 +36,16 @@ void rxrpc_notify_socket(struct rxrpc_call *call) sk = &rx->sk; if (rx && sk->sk_state < RXRPC_CLOSE) { if (call->notify_rx) { - spin_lock_bh(&call->notify_lock); + spin_lock(&call->notify_lock); call->notify_rx(sk, call, call->user_call_ID); - spin_unlock_bh(&call->notify_lock); + spin_unlock(&call->notify_lock); } else { - write_lock_bh(&rx->recvmsg_lock); + write_lock(&rx->recvmsg_lock); if (list_empty(&call->recvmsg_link)) { rxrpc_get_call(call, rxrpc_call_get_notify_socket); list_add_tail(&call->recvmsg_link, &rx->recvmsg_q); } - write_unlock_bh(&rx->recvmsg_lock); + write_unlock(&rx->recvmsg_lock); if (!sock_flag(sk, SOCK_DEAD)) { _debug("call %ps", sk->sk_data_ready); @@ -87,9 +87,9 @@ bool rxrpc_set_call_completion(struct rxrpc_call *call, bool ret = false; if (call->state < RXRPC_CALL_COMPLETE) { - write_lock_bh(&call->state_lock); + write_lock(&call->state_lock); ret = __rxrpc_set_call_completion(call, compl, abort_code, error); - write_unlock_bh(&call->state_lock); + write_unlock(&call->state_lock); } return ret; } @@ -107,9 +107,9 @@ bool rxrpc_call_completed(struct rxrpc_call *call) bool ret = false; if (call->state < RXRPC_CALL_COMPLETE) { - write_lock_bh(&call->state_lock); + write_lock(&call->state_lock); ret = __rxrpc_call_completed(call); - write_unlock_bh(&call->state_lock); + write_unlock(&call->state_lock); } return ret; } @@ -131,9 +131,9 @@ bool rxrpc_abort_call(const char *why, struct rxrpc_call *call, { bool ret; - write_lock_bh(&call->state_lock); + write_lock(&call->state_lock); ret = __rxrpc_abort_call(why, call, seq, abort_code, error); - write_unlock_bh(&call->state_lock); + write_unlock(&call->state_lock); return ret; } @@ -193,23 +193,23 @@ static void rxrpc_end_rx_phase(struct rxrpc_call *call, rxrpc_serial_t serial) if (call->state == RXRPC_CALL_CLIENT_RECV_REPLY) rxrpc_propose_delay_ACK(call, serial, rxrpc_propose_ack_terminal_ack); 
- write_lock_bh(&call->state_lock); + write_lock(&call->state_lock); switch (call->state) { case RXRPC_CALL_CLIENT_RECV_REPLY: __rxrpc_call_completed(call); - write_unlock_bh(&call->state_lock); + write_unlock(&call->state_lock); break; case RXRPC_CALL_SERVER_RECV_REQUEST: call->state = RXRPC_CALL_SERVER_ACK_REQUEST; call->expect_req_by = jiffies + MAX_JIFFY_OFFSET; - write_unlock_bh(&call->state_lock); + write_unlock(&call->state_lock); rxrpc_propose_delay_ACK(call, serial, rxrpc_propose_ack_processing_op); break; default: - write_unlock_bh(&call->state_lock); + write_unlock(&call->state_lock); break; } } @@ -442,14 +442,14 @@ int rxrpc_recvmsg(struct socket *sock, struct msghdr *msg, size_t len, /* Find the next call and dequeue it if we're not just peeking. If we * do dequeue it, that comes with a ref that we will need to release. */ - write_lock_bh(&rx->recvmsg_lock); + write_lock(&rx->recvmsg_lock); l = rx->recvmsg_q.next; call = list_entry(l, struct rxrpc_call, recvmsg_link); if (!(flags & MSG_PEEK)) list_del_init(&call->recvmsg_link); else rxrpc_get_call(call, rxrpc_call_get_recvmsg); - write_unlock_bh(&rx->recvmsg_lock); + write_unlock(&rx->recvmsg_lock); trace_rxrpc_recvmsg(call, rxrpc_recvmsg_dequeue, 0); @@ -538,9 +538,9 @@ int rxrpc_recvmsg(struct socket *sock, struct msghdr *msg, size_t len, error_requeue_call: if (!(flags & MSG_PEEK)) { - write_lock_bh(&rx->recvmsg_lock); + write_lock(&rx->recvmsg_lock); list_add(&call->recvmsg_link, &rx->recvmsg_q); - write_unlock_bh(&rx->recvmsg_lock); + write_unlock(&rx->recvmsg_lock); trace_rxrpc_recvmsg(call, rxrpc_recvmsg_requeue, 0); } else { rxrpc_put_call(call, rxrpc_call_put_recvmsg); diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c index 58e0a36f6aa9..2c861c55ed70 100644 --- a/net/rxrpc/sendmsg.c +++ b/net/rxrpc/sendmsg.c @@ -195,7 +195,7 @@ static void rxrpc_queue_packet(struct rxrpc_sock *rx, struct rxrpc_call *call, if (last || call->state == RXRPC_CALL_SERVER_ACK_REQUEST) { _debug("________awaiting reply/ACK__________"); - write_lock_bh(&call->state_lock); + write_lock(&call->state_lock); switch (call->state) { case RXRPC_CALL_CLIENT_SEND_REQUEST: call->state = RXRPC_CALL_CLIENT_AWAIT_REPLY; @@ -218,7 +218,7 @@ static void rxrpc_queue_packet(struct rxrpc_sock *rx, struct rxrpc_call *call, default: break; } - write_unlock_bh(&call->state_lock); + write_unlock(&call->state_lock); } if (poke) @@ -357,10 +357,10 @@ static int rxrpc_send_data(struct rxrpc_sock *rx, success: ret = copied; if (READ_ONCE(call->state) == RXRPC_CALL_COMPLETE) { - read_lock_bh(&call->state_lock); + read_lock(&call->state_lock); if (call->error < 0) ret = call->error; - read_unlock_bh(&call->state_lock); + read_unlock(&call->state_lock); } out: call->tx_pending = txb; @@ -725,9 +725,9 @@ int rxrpc_kernel_send_data(struct socket *sock, struct rxrpc_call *call, notify_end_tx, &dropped_lock); break; case RXRPC_CALL_COMPLETE: - read_lock_bh(&call->state_lock); + read_lock(&call->state_lock); ret = call->error; - read_unlock_bh(&call->state_lock); + read_unlock(&call->state_lock); break; default: /* Request phase complete for this client call */ From patchwork Wed Nov 30 16:58:55 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 27908 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp1042676wrr; Wed, 30 Nov 2022 09:05:49 -0800 (PST) X-Google-Smtp-Source: 
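A brief note on why the _bh suffixes removed in patch 31/35 above can go: spin_lock_bh() disables softirq processing locally so that a softirq-context user of the same lock cannot interrupt the lock holder and deadlock on the same CPU; once no softirq context takes the lock any more (here, because the packet wrangling has moved to the I/O thread), plain spin_lock() is sufficient. The module-style sketch below illustrates the rule with a throwaway tasklet; it is a generic illustration under invented names, not rxrpc code.

#include <linux/init.h>
#include <linux/module.h>
#include <linux/spinlock.h>
#include <linux/interrupt.h>

static DEFINE_SPINLOCK(demo_lock);
static unsigned long demo_counter;
static struct tasklet_struct demo_tasklet;

/* Softirq-context user of demo_lock. */
static void demo_tasklet_fn(struct tasklet_struct *t)
{
        spin_lock(&demo_lock);          /* already in BH context, plain lock is fine */
        demo_counter++;
        spin_unlock(&demo_lock);
}

static int __init demo_init(void)
{
        tasklet_setup(&demo_tasklet, demo_tasklet_fn);
        tasklet_schedule(&demo_tasklet);

        /* While the tasklet may take demo_lock from softirq context, process
         * context must disable BHs around its own critical section.
         */
        spin_lock_bh(&demo_lock);
        demo_counter++;
        spin_unlock_bh(&demo_lock);

        /* Once the softirq user is gone, the _bh form is no longer needed. */
        tasklet_kill(&demo_tasklet);
        spin_lock(&demo_lock);
        demo_counter++;
        spin_unlock(&demo_lock);

        pr_info("demo_counter=%lu\n", demo_counter);
        return 0;
}

static void __exit demo_exit(void)
{
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_DESCRIPTION("spin_lock vs spin_lock_bh illustration");
MODULE_LICENSE("GPL");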
Subject: [PATCH net-next 32/35] rxrpc: Trace/count transmission underflows and cwnd resets From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Wed, 30 Nov 2022 16:58:55 +0000 Message-ID: <166982753502.621383.18092511902509739439.stgit@warthog.procyon.org.uk> In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> References: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 Add a tracepoint to log when a cwnd reset occurs due to lack of transmission on a call. Add stat counters to count transmission underflows (ie. when we have tx window space, but sendmsg doesn't manage to keep up), cwnd resets and transmission failures. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- include/trace/events/rxrpc.h | 38 ++++++++++++++++++++++++++++++++++++++ net/rxrpc/ar-internal.h | 3 +++ net/rxrpc/call_event.c | 4 +++- net/rxrpc/input.c | 7 +++++-- net/rxrpc/output.c | 2 ++ net/rxrpc/proc.c | 14 ++++++++++---- 6 files changed, 61 insertions(+), 7 deletions(-) diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index c49b0c233594..b41e913ae78a 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -1446,6 +1446,44 @@ TRACE_EVENT(rxrpc_congest, __entry->sum.retrans_timeo ?
" rTxTo" : "") ); +TRACE_EVENT(rxrpc_reset_cwnd, + TP_PROTO(struct rxrpc_call *call, ktime_t now), + + TP_ARGS(call, now), + + TP_STRUCT__entry( + __field(unsigned int, call ) + __field(enum rxrpc_congest_mode, mode ) + __field(unsigned short, cwnd ) + __field(unsigned short, extra ) + __field(rxrpc_seq_t, hard_ack ) + __field(rxrpc_seq_t, prepared ) + __field(ktime_t, since_last_tx ) + __field(bool, has_data ) + ), + + TP_fast_assign( + __entry->call = call->debug_id; + __entry->mode = call->cong_mode; + __entry->cwnd = call->cong_cwnd; + __entry->extra = call->cong_extra; + __entry->hard_ack = call->acks_hard_ack; + __entry->prepared = call->tx_prepared - call->tx_bottom; + __entry->since_last_tx = ktime_sub(now, call->tx_last_sent); + __entry->has_data = !list_empty(&call->tx_sendmsg); + ), + + TP_printk("c=%08x q=%08x %s cw=%u+%u pr=%u tm=%llu d=%u", + __entry->call, + __entry->hard_ack, + __print_symbolic(__entry->mode, rxrpc_congest_modes), + __entry->cwnd, + __entry->extra, + __entry->prepared, + ktime_to_ns(__entry->since_last_tx), + __entry->has_data) + ); + TRACE_EVENT(rxrpc_disconnect_call, TP_PROTO(struct rxrpc_call *call), diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 6b993a3d4186..6cfe366ee224 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -101,6 +101,9 @@ struct rxrpc_net { atomic_t stat_tx_data_retrans; atomic_t stat_tx_data_send; atomic_t stat_tx_data_send_frag; + atomic_t stat_tx_data_send_fail; + atomic_t stat_tx_data_underflow; + atomic_t stat_tx_data_cwnd_reset; atomic_t stat_rx_data; atomic_t stat_rx_data_reqack; atomic_t stat_rx_data_jumbo; diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c index 18591f9ecc6a..9f1e490ab976 100644 --- a/net/rxrpc/call_event.c +++ b/net/rxrpc/call_event.c @@ -317,8 +317,10 @@ static void rxrpc_transmit_some_data(struct rxrpc_call *call) case RXRPC_CALL_CLIENT_AWAIT_REPLY: if (!rxrpc_tx_window_has_space(call)) return; - if (list_empty(&call->tx_sendmsg)) + if (list_empty(&call->tx_sendmsg)) { + rxrpc_inc_stat(call->rxnet, stat_tx_data_underflow); return; + } rxrpc_decant_prepared_tx(call); break; default: diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index dd2ac5d55e1c..2988e3d0c1f6 100644 --- a/net/rxrpc/input.c +++ b/net/rxrpc/input.c @@ -27,6 +27,7 @@ static void rxrpc_congestion_management(struct rxrpc_call *call, enum rxrpc_congest_change change = rxrpc_cong_no_change; unsigned int cumulative_acks = call->cong_cumul_acks; unsigned int cwnd = call->cong_cwnd; + ktime_t now; bool resend = false; summary->flight_size = @@ -59,13 +60,15 @@ static void rxrpc_congestion_management(struct rxrpc_call *call, /* If we haven't transmitted anything for >1RTT, we should reset the * congestion management state. 
*/ + now = ktime_get_real(); if ((call->cong_mode == RXRPC_CALL_SLOW_START || call->cong_mode == RXRPC_CALL_CONGEST_AVOIDANCE) && ktime_before(ktime_add_us(call->tx_last_sent, - call->peer->srtt_us >> 3), - ktime_get_real()) + call->peer->srtt_us >> 3), now) ) { + trace_rxrpc_reset_cwnd(call, now); change = rxrpc_cong_idle_reset; + rxrpc_inc_stat(call->rxnet, stat_tx_data_cwnd_reset); summary->mode = RXRPC_CALL_SLOW_START; if (RXRPC_TX_SMSS > 2190) summary->cwnd = 2; diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index e5d715b855fc..8147a47d1702 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -485,6 +485,7 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) up_read(&conn->local->defrag_sem); if (ret < 0) { + rxrpc_inc_stat(call->rxnet, stat_tx_data_send_fail); rxrpc_cancel_rtt_probe(call, serial, rtt_slot); trace_rxrpc_tx_fail(call->debug_id, serial, ret, rxrpc_tx_point_call_data_nofrag); @@ -567,6 +568,7 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) } if (ret < 0) { + rxrpc_inc_stat(call->rxnet, stat_tx_data_send_fail); rxrpc_cancel_rtt_probe(call, serial, rtt_slot); trace_rxrpc_tx_fail(call->debug_id, serial, ret, rxrpc_tx_point_call_data_frag); diff --git a/net/rxrpc/proc.c b/net/rxrpc/proc.c index 5af7c8ee4b1a..6816934cb4cf 100644 --- a/net/rxrpc/proc.c +++ b/net/rxrpc/proc.c @@ -398,13 +398,16 @@ int rxrpc_stats_show(struct seq_file *seq, void *v) struct rxrpc_net *rxnet = rxrpc_net(seq_file_single_net(seq)); seq_printf(seq, - "Data : send=%u sendf=%u\n", + "Data : send=%u sendf=%u fail=%u\n", atomic_read(&rxnet->stat_tx_data_send), - atomic_read(&rxnet->stat_tx_data_send_frag)); + atomic_read(&rxnet->stat_tx_data_send_frag), + atomic_read(&rxnet->stat_tx_data_send_fail)); seq_printf(seq, - "Data-Tx : nr=%u retrans=%u\n", + "Data-Tx : nr=%u retrans=%u uf=%u cwr=%u\n", atomic_read(&rxnet->stat_tx_data), - atomic_read(&rxnet->stat_tx_data_retrans)); + atomic_read(&rxnet->stat_tx_data_retrans), + atomic_read(&rxnet->stat_tx_data_underflow), + atomic_read(&rxnet->stat_tx_data_cwnd_reset)); seq_printf(seq, "Data-Rx : nr=%u reqack=%u jumbo=%u\n", atomic_read(&rxnet->stat_rx_data), @@ -472,8 +475,11 @@ int rxrpc_stats_clear(struct file *file, char *buf, size_t size) atomic_set(&rxnet->stat_tx_data, 0); atomic_set(&rxnet->stat_tx_data_retrans, 0); + atomic_set(&rxnet->stat_tx_data_underflow, 0); + atomic_set(&rxnet->stat_tx_data_cwnd_reset, 0); atomic_set(&rxnet->stat_tx_data_send, 0); atomic_set(&rxnet->stat_tx_data_send_frag, 0); + atomic_set(&rxnet->stat_tx_data_send_fail, 0); atomic_set(&rxnet->stat_rx_data, 0); atomic_set(&rxnet->stat_rx_data_reqack, 0); atomic_set(&rxnet->stat_rx_data_jumbo, 0); From patchwork Wed Nov 30 16:59:03 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 27911 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp1042995wrr; Wed, 30 Nov 2022 09:06:15 -0800 (PST) X-Google-Smtp-Source: AA0mqf6Cw28ERyl0rebGafeQYyck6pO+oYDE4MDNH2NWfqZFY0k8XCAijMb0XJh1KhES0t/tPRFj X-Received: by 2002:a17:906:8156:b0:7c0:8fe9:cd0a with SMTP id z22-20020a170906815600b007c08fe9cd0amr6052645ejw.348.1669827975136; Wed, 30 Nov 2022 09:06:15 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669827975; cv=none; d=google.com; s=arc-20160816; b=MEuDE4pkbXfygAR6YBnqXvIcbT1zOOh4AzdZldm5CgjyYY26UeupSIsNAKXflD9q16 
Subject: [PATCH net-next 33/35] rxrpc: Move the cwnd degradation after transmitting packets From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Wed, 30 Nov 2022 16:59:03 +0000 Message-ID: <166982754368.621383.8337942706520370774.stgit@warthog.procyon.org.uk> In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> References: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 When we've gone for >1RTT without transmitting a packet, we should reduce the ssthresh and cut the cwnd by half (as suggested in RFC2861 sec 3.1). However, we may receive ACK packets in a batch and the first of these may cut the cwnd, preventing further transmission, and each subsequent one cuts the cwnd yet further, reducing it to the floor and killing performance. Fix this by moving the cwnd reset to after doing the transmission and resetting the base time such that we don't cut the cwnd by half again for at least another RTT. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- net/rxrpc/ar-internal.h | 2 ++ net/rxrpc/call_event.c | 7 +++++++ net/rxrpc/input.c | 49 ++++++++++++++++++++++++++--------------------- net/rxrpc/proc.c | 5 +++-- 4 files changed, 39 insertions(+), 24 deletions(-) diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 6cfe366ee224..785cd0dd1eea 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -666,6 +666,7 @@ struct rxrpc_call { * packets) rather than bytes. */ #define RXRPC_TX_SMSS RXRPC_JUMBO_DATALEN +#define RXRPC_MIN_CWND (RXRPC_TX_SMSS > 2190 ? 2 : RXRPC_TX_SMSS > 1095 ?
3 : 4) u8 cong_cwnd; /* Congestion window size */ u8 cong_extra; /* Extra to send for congestion management */ u8 cong_ssthresh; /* Slow-start threshold */ @@ -953,6 +954,7 @@ void rxrpc_unpublish_service_conn(struct rxrpc_connection *); /* * input.c */ +void rxrpc_congestion_degrade(struct rxrpc_call *); void rxrpc_input_call_packet(struct rxrpc_call *, struct sk_buff *); void rxrpc_implicit_end_call(struct rxrpc_call *, struct sk_buff *); diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c index 9f1e490ab976..fd122e3726bd 100644 --- a/net/rxrpc/call_event.c +++ b/net/rxrpc/call_event.c @@ -427,6 +427,13 @@ void rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) rxrpc_transmit_some_data(call); + if (skb) { + struct rxrpc_skb_priv *sp = rxrpc_skb(skb); + + if (sp->hdr.type == RXRPC_PACKET_TYPE_ACK) + rxrpc_congestion_degrade(call); + } + if (test_and_clear_bit(RXRPC_CALL_EV_INITIAL_PING, &call->events)) rxrpc_send_initial_ping(call); diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index 2988e3d0c1f6..d0e20e946e48 100644 --- a/net/rxrpc/input.c +++ b/net/rxrpc/input.c @@ -27,7 +27,6 @@ static void rxrpc_congestion_management(struct rxrpc_call *call, enum rxrpc_congest_change change = rxrpc_cong_no_change; unsigned int cumulative_acks = call->cong_cumul_acks; unsigned int cwnd = call->cong_cwnd; - ktime_t now; bool resend = false; summary->flight_size = @@ -57,27 +56,6 @@ static void rxrpc_congestion_management(struct rxrpc_call *call, summary->cumulative_acks = cumulative_acks; summary->dup_acks = call->cong_dup_acks; - /* If we haven't transmitted anything for >1RTT, we should reset the - * congestion management state. - */ - now = ktime_get_real(); - if ((call->cong_mode == RXRPC_CALL_SLOW_START || - call->cong_mode == RXRPC_CALL_CONGEST_AVOIDANCE) && - ktime_before(ktime_add_us(call->tx_last_sent, - call->peer->srtt_us >> 3), now) - ) { - trace_rxrpc_reset_cwnd(call, now); - change = rxrpc_cong_idle_reset; - rxrpc_inc_stat(call->rxnet, stat_tx_data_cwnd_reset); - summary->mode = RXRPC_CALL_SLOW_START; - if (RXRPC_TX_SMSS > 2190) - summary->cwnd = 2; - else if (RXRPC_TX_SMSS > 1095) - summary->cwnd = 3; - else - summary->cwnd = 4; - } - switch (call->cong_mode) { case RXRPC_CALL_SLOW_START: if (summary->saw_nacks) @@ -197,6 +175,33 @@ static void rxrpc_congestion_management(struct rxrpc_call *call, goto out_no_clear_ca; } +/* + * Degrade the congestion window if we haven't transmitted a packet for >1RTT. + */ +void rxrpc_congestion_degrade(struct rxrpc_call *call) +{ + ktime_t rtt, now; + + if (call->cong_mode != RXRPC_CALL_SLOW_START && + call->cong_mode != RXRPC_CALL_CONGEST_AVOIDANCE) + return; + if (call->state == RXRPC_CALL_CLIENT_AWAIT_REPLY) + return; + + rtt = ns_to_ktime(call->peer->srtt_us * (1000 / 8)); + now = ktime_get_real(); + if (!ktime_before(ktime_add(call->tx_last_sent, rtt), now)) + return; + + trace_rxrpc_reset_cwnd(call, now); + rxrpc_inc_stat(call->rxnet, stat_tx_data_cwnd_reset); + call->tx_last_sent = now; + call->cong_mode = RXRPC_CALL_SLOW_START; + call->cong_ssthresh = max_t(unsigned int, call->cong_ssthresh, + call->cong_cwnd * 3 / 4); + call->cong_cwnd = max_t(unsigned int, call->cong_cwnd / 2, RXRPC_MIN_CWND); +} + /* * Apply a hard ACK by advancing the Tx window. 
*/ diff --git a/net/rxrpc/proc.c b/net/rxrpc/proc.c index 6816934cb4cf..3a59591ec061 100644 --- a/net/rxrpc/proc.c +++ b/net/rxrpc/proc.c @@ -61,7 +61,7 @@ static int rxrpc_call_seq_show(struct seq_file *seq, void *v) "Proto Local " " Remote " " SvID ConnID CallID End Use State Abort " - " DebugId TxSeq TW RxSeq RW RxSerial RxTimo\n"); + " DebugId TxSeq TW RxSeq RW RxSerial CW RxTimo\n"); return 0; } @@ -84,7 +84,7 @@ static int rxrpc_call_seq_show(struct seq_file *seq, void *v) wtmp = atomic64_read_acquire(&call->ackr_window); seq_printf(seq, "UDP %-47.47s %-47.47s %4x %08x %08x %s %3u" - " %-8.8s %08x %08x %08x %02x %08x %02x %08x %06lx\n", + " %-8.8s %08x %08x %08x %02x %08x %02x %08x %02x %06lx\n", lbuff, rbuff, call->dest_srx.srx_service, @@ -98,6 +98,7 @@ static int rxrpc_call_seq_show(struct seq_file *seq, void *v) acks_hard_ack, READ_ONCE(call->tx_top) - acks_hard_ack, lower_32_bits(wtmp), upper_32_bits(wtmp) - lower_32_bits(wtmp), call->rx_serial, + call->cong_cwnd, timeout); return 0; From patchwork Wed Nov 30 16:59:12 2022 X-Patchwork-Submitter: David Howells X-Patchwork-Id: 27912
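To make the arithmetic of the idle cwnd reset in patch 33/35 above concrete, here is a small standalone restatement of the reduction step in rxrpc_congestion_degrade() with a worked example. Only the max(ssthresh, 3/4 cwnd) and max(cwnd/2, floor) steps are taken from the patch; the types, the floor value and the nanosecond arguments are simplified stand-ins, not rxrpc definitions.

#include <stdio.h>

#define MIN_CWND 4      /* stand-in for RXRPC_MIN_CWND, which the patch derives from RXRPC_TX_SMSS */

struct cong_state {
        unsigned int cwnd;
        unsigned int ssthresh;
};

/* Apply the RFC 2861-style reduction after more than one RTT of idleness. */
static void degrade_cwnd(struct cong_state *c, unsigned long idle_ns,
                         unsigned long rtt_ns)
{
        if (idle_ns <= rtt_ns)
                return;                         /* not idle long enough */
        if (c->ssthresh < c->cwnd * 3 / 4)
                c->ssthresh = c->cwnd * 3 / 4;  /* remember 3/4 of the old window */
        c->cwnd = c->cwnd / 2;                  /* halve the window ... */
        if (c->cwnd < MIN_CWND)
                c->cwnd = MIN_CWND;             /* ... but never below the floor */
}

int main(void)
{
        struct cong_state c = { .cwnd = 40, .ssthresh = 8 };

        /* e.g. 3ms idle against a 1ms smoothed RTT */
        degrade_cwnd(&c, 3000000, 1000000);
        printf("cwnd=%u ssthresh=%u\n", c.cwnd, c.ssthresh);   /* cwnd=20 ssthresh=30 */

        /* Applying it again while idle is below 1 RTT changes nothing, which is
         * why the patch also resets the base time after a cut. */
        degrade_cwnd(&c, 500000, 1000000);
        printf("cwnd=%u ssthresh=%u\n", c.cwnd, c.ssthresh);   /* unchanged */
        return 0;
}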
3798903 Subject: [PATCH net-next 34/35] rxrpc: Fold __rxrpc_unuse_local() into rxrpc_unuse_local() From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Wed, 30 Nov 2022 16:59:12 +0000 Message-ID: <166982755238.621383.17371127181867533147.stgit@warthog.procyon.org.uk> In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> References: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750941543647229050?= X-GMAIL-MSGID: =?utf-8?q?1750941543647229050?= Fold __rxrpc_unuse_local() into rxrpc_unuse_local() as the latter is now the only user of the former. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- net/rxrpc/ar-internal.h | 12 ------------ net/rxrpc/local_object.c | 12 ++++++++++-- 2 files changed, 10 insertions(+), 14 deletions(-) diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 785cd0dd1eea..2a4928249a64 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -1002,18 +1002,6 @@ void rxrpc_unuse_local(struct rxrpc_local *, enum rxrpc_local_trace); void rxrpc_destroy_local(struct rxrpc_local *local); void rxrpc_destroy_all_locals(struct rxrpc_net *); -static inline bool __rxrpc_unuse_local(struct rxrpc_local *local, - enum rxrpc_local_trace why) -{ - unsigned int debug_id = local->debug_id; - int r, u; - - r = refcount_read(&local->ref); - u = atomic_dec_return(&local->active_users); - trace_rxrpc_local(debug_id, why, r, u); - return u == 0; -} - static inline bool __rxrpc_use_local(struct rxrpc_local *local, enum rxrpc_local_trace why) { diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c index c73a5a1bc088..1e994a83db2b 100644 --- a/net/rxrpc/local_object.c +++ b/net/rxrpc/local_object.c @@ -359,8 +359,16 @@ struct rxrpc_local *rxrpc_use_local(struct rxrpc_local *local, */ void rxrpc_unuse_local(struct rxrpc_local *local, enum rxrpc_local_trace why) { - if (local && __rxrpc_unuse_local(local, why)) - kthread_stop(local->io_thread); + unsigned int debug_id = local->debug_id; + int r, u; + + if (local) { + r = refcount_read(&local->ref); + u = atomic_dec_return(&local->active_users); + trace_rxrpc_local(debug_id, why, r, u); + if (u == 0) + kthread_stop(local->io_thread); + } } /* From patchwork Wed Nov 30 16:59:20 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 27913 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp1043137wrr; Wed, 30 Nov 2022 09:06:25 -0800 (PST) X-Google-Smtp-Source: AA0mqf4hPJ5VT2hrPjbE0Ki3/IShTEUSJEChoPe7rS6zaoMz9OEnUeuptt9D0eSQqr4yKZ/qwoAg X-Received: by 2002:a17:907:c016:b0:7a4:98cc:7c8e with SMTP id ss22-20020a170907c01600b007a498cc7c8emr42756058ejc.48.1669827985824; Wed, 30 Nov 2022 09:06:25 -0800 (PST) ARC-Seal: 
From patchwork Wed Nov 30 16:59:20 2022
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 27913
Subject: [PATCH net-next 35/35] rxrpc: Transmit ACKs at the point of generation
From: David Howells
To: netdev@vger.kernel.org
Cc: Marc Dionne, linux-afs@lists.infradead.org, dhowells@redhat.com,
    linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org
Date: Wed, 30 Nov 2022 16:59:20 +0000
Message-ID: <166982756095.621383.1390522561920223769.stgit@warthog.procyon.org.uk>
In-Reply-To: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk>
References: <166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk>
User-Agent: StGit/1.5
MIME-Version: 1.0

For ACKs generated inside the I/O thread, transmit the ACK at the point of
generation.  Where the ACK is generated outside of the I/O thread, it's
offloaded to the I/O thread to transmit it.
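As an illustration of that rule only, here is a self-contained userspace C
model, not the rxrpc implementation: an ACK built on the I/O thread is
transmitted straight away, while an ACK built in any other context is queued
and the I/O thread is woken to drain it. All names (struct ack, send_ack(),
io_thread_drain(), in_io_thread) are invented stand-ins; in the patch itself
the immediate path is rxrpc_send_ack_packet() and the wake-up is
rxrpc_wake_up_io_thread().

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Invented stand-in for a prepared ACK (an rxrpc_txbuf in the kernel). */
struct ack {
	int serial;
};

static struct ack pending[16];		/* offload queue drained by the I/O thread */
static int npending;

static void transmit_ack(const struct ack *a)
{
	printf("TX ack, serial=%d\n", a->serial);
}

static void wake_io_thread(void)
{
	printf("wake the I/O thread\n");	/* rxrpc_wake_up_io_thread() in the patch */
}

/*
 * in_io_thread is an invented stand-in for "am I running on the endpoint's
 * I/O thread?"; the kernel knows this from the calling context.
 */
static void send_ack(struct ack a, bool in_io_thread)
{
	if (in_io_thread) {
		transmit_ack(&a);		/* transmit at the point of generation */
	} else {
		assert(npending < (int)(sizeof(pending) / sizeof(pending[0])));
		pending[npending++] = a;	/* offload the ACK... */
		wake_io_thread();		/* ...and let the I/O thread send it */
	}
}

/* Called from the I/O thread's main loop to send any offloaded ACKs. */
static void io_thread_drain(void)
{
	for (int i = 0; i < npending; i++)
		transmit_ack(&pending[i]);
	npending = 0;
}

int main(void)
{
	send_ack((struct ack){ .serial = 1 }, true);	/* generated inside the I/O thread */
	send_ack((struct ack){ .serial = 2 }, false);	/* generated elsewhere, e.g. a recvmsg path */
	io_thread_drain();
	return 0;
}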
Signed-off-by: David Howells
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
---
 include/trace/events/rxrpc.h |    3 ---
 net/rxrpc/ar-internal.h      |    5 +----
 net/rxrpc/call_event.c       |   17 ++---------------
 net/rxrpc/io_thread.c        |    5 -----
 net/rxrpc/local_object.c     |    2 --
 net/rxrpc/output.c           |   42 ++----------------------------------------
 net/rxrpc/recvmsg.c          |    3 ---
 net/rxrpc/sendmsg.c          |    2 --
 net/rxrpc/txbuf.c            |    1 -
 9 files changed, 5 insertions(+), 75 deletions(-)

diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
index b41e913ae78a..049b52e7aa6a 100644
--- a/include/trace/events/rxrpc.h
+++ b/include/trace/events/rxrpc.h
@@ -63,7 +63,6 @@
 	EM(rxrpc_local_put_peer,		"PUT peer    ") \
 	EM(rxrpc_local_put_prealloc_conn,	"PUT conn-pre") \
 	EM(rxrpc_local_put_release_sock,	"PUT rel-sock") \
-	EM(rxrpc_local_see_tx_ack,		"SEE tx-ack  ") \
 	EM(rxrpc_local_stop,			"STOP        ") \
 	EM(rxrpc_local_stopped,			"STOPPED     ") \
 	EM(rxrpc_local_unuse_bind,		"UNU bind    ") \
@@ -156,7 +155,6 @@
 	EM(rxrpc_call_get_recvmsg,		"GET recvmsg ") \
 	EM(rxrpc_call_get_release_sock,		"GET rel-sock") \
 	EM(rxrpc_call_get_sendmsg,		"GET sendmsg ") \
-	EM(rxrpc_call_get_send_ack,		"GET send-ack") \
 	EM(rxrpc_call_get_userid,		"GET user-id ") \
 	EM(rxrpc_call_new_client,		"NEW client  ") \
 	EM(rxrpc_call_new_prealloc_service,	"NEW prealloc") \
@@ -168,7 +166,6 @@
 	EM(rxrpc_call_put_recvmsg,		"PUT recvmsg ") \
 	EM(rxrpc_call_put_release_sock,		"PUT rls-sock") \
 	EM(rxrpc_call_put_release_sock_tba,	"PUT rls-sk-a") \
-	EM(rxrpc_call_put_send_ack,		"PUT send-ack") \
 	EM(rxrpc_call_put_sendmsg,		"PUT sendmsg ") \
 	EM(rxrpc_call_put_unnotify,		"PUT unnotify") \
 	EM(rxrpc_call_put_userid_exists,	"PUT u-exists") \
diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
index 2a4928249a64..e7dccab7b741 100644
--- a/net/rxrpc/ar-internal.h
+++ b/net/rxrpc/ar-internal.h
@@ -287,8 +287,6 @@ struct rxrpc_local {
 	struct hlist_node	link;
 	struct socket		*socket;	/* my UDP socket */
 	struct task_struct	*io_thread;
-	struct list_head	ack_tx_queue;	/* List of ACKs that need sending */
-	spinlock_t		ack_tx_lock;	/* ACK list lock */
 	struct rxrpc_sock __rcu	*service;	/* Service(s) listening on this endpoint */
 	struct rw_semaphore	defrag_sem;	/* control re-enablement of IP DF bit */
 	struct sk_buff_head	rx_queue;	/* Received packets */
@@ -762,7 +760,6 @@ struct rxrpc_txbuf {
 	struct rcu_head		rcu;
 	struct list_head	call_link;	/* Link in call->tx_sendmsg/tx_buffer */
 	struct list_head	tx_link;	/* Link in live Enc queue or Tx queue */
-	struct rxrpc_call	*call;		/* Call to which belongs */
 	ktime_t			last_sent;	/* Time at which last transmitted */
 	refcount_t		ref;
 	rxrpc_seq_t		seq;		/* Sequence number of this packet */
@@ -1047,7 +1044,7 @@ static inline struct rxrpc_net *rxrpc_net(struct net *net)
 /*
  * output.c
  */
-void rxrpc_transmit_ack_packets(struct rxrpc_local *);
+int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb);
 int rxrpc_send_abort_packet(struct rxrpc_call *);
 int rxrpc_send_data_packet(struct rxrpc_call *, struct rxrpc_txbuf *);
 void rxrpc_reject_packet(struct rxrpc_local *local, struct sk_buff *skb);
diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c
index fd122e3726bd..b2cf448fb02c 100644
--- a/net/rxrpc/call_event.c
+++ b/net/rxrpc/call_event.c
@@ -69,7 +69,6 @@ void rxrpc_propose_delay_ACK(struct rxrpc_call *call, rxrpc_serial_t serial,
 void rxrpc_send_ACK(struct rxrpc_call *call, u8 ack_reason,
 		    rxrpc_serial_t serial, enum rxrpc_propose_ack_trace why)
 {
-	struct rxrpc_local *local = call->conn->local;
 	struct rxrpc_txbuf *txb;
 
 	if (test_bit(RXRPC_CALL_DISCONNECTED, &call->flags))
@@ -96,17 +95,9 @@ void rxrpc_send_ACK(struct rxrpc_call *call, u8 ack_reason,
 	txb->ack.reason = ack_reason;
 	txb->ack.nAcks = 0;
 
-	if (!rxrpc_try_get_call(call, rxrpc_call_get_send_ack)) {
-		rxrpc_put_txbuf(txb, rxrpc_txbuf_put_nomem);
-		return;
-	}
-
-	spin_lock(&local->ack_tx_lock);
-	list_add_tail(&txb->tx_link, &local->ack_tx_queue);
-	spin_unlock(&local->ack_tx_lock);
 	trace_rxrpc_send_ack(call, why, ack_reason, serial);
-
-	rxrpc_wake_up_io_thread(local);
+	rxrpc_send_ack_packet(call, txb);
+	rxrpc_put_txbuf(txb, rxrpc_txbuf_put_ack_tx);
 }
 
 /*
@@ -294,10 +285,6 @@ static void rxrpc_decant_prepared_tx(struct rxrpc_call *call)
 
 		rxrpc_transmit_one(call, txb);
 
-		// TODO: Drain the transmission buffers. Do this somewhere better
-		if (after(call->acks_hard_ack, call->tx_bottom + 16))
-			rxrpc_shrink_call_tx_buffer(call);
-
 		if (!rxrpc_tx_window_has_space(call))
 			break;
 	}
diff --git a/net/rxrpc/io_thread.c b/net/rxrpc/io_thread.c
index 19aa315eddf5..d83ae3193032 100644
--- a/net/rxrpc/io_thread.c
+++ b/net/rxrpc/io_thread.c
@@ -447,11 +447,6 @@ int rxrpc_io_thread(void *data)
 			continue;
 		}
 
-		if (!list_empty(&local->ack_tx_queue)) {
-			rxrpc_transmit_ack_packets(local);
-			continue;
-		}
-
 		/* Process received packets and errors. */
 		if ((skb = __skb_dequeue(&rx_queue))) {
 			switch (skb->mark) {
diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c
index 1e994a83db2b..44222923c0d1 100644
--- a/net/rxrpc/local_object.c
+++ b/net/rxrpc/local_object.c
@@ -96,8 +96,6 @@ static struct rxrpc_local *rxrpc_alloc_local(struct rxrpc_net *rxnet,
 		atomic_set(&local->active_users, 1);
 		local->rxnet = rxnet;
 		INIT_HLIST_NODE(&local->link);
-		INIT_LIST_HEAD(&local->ack_tx_queue);
-		spin_lock_init(&local->ack_tx_lock);
 		init_rwsem(&local->defrag_sem);
 		skb_queue_head_init(&local->rx_queue);
 		INIT_LIST_HEAD(&local->call_attend_q);
diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c
index 8147a47d1702..3d8c9f830ee0 100644
--- a/net/rxrpc/output.c
+++ b/net/rxrpc/output.c
@@ -203,12 +203,11 @@ static void rxrpc_cancel_rtt_probe(struct rxrpc_call *call,
 }
 
 /*
- * Send an ACK call packet.
+ * Transmit an ACK packet.
  */
-static int rxrpc_send_ack_packet(struct rxrpc_local *local, struct rxrpc_txbuf *txb)
+int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb)
 {
 	struct rxrpc_connection *conn;
-	struct rxrpc_call *call = txb->call;
 	struct msghdr msg;
 	struct kvec iov[1];
 	rxrpc_serial_t serial;
@@ -271,43 +270,6 @@ static int rxrpc_send_ack_packet(struct rxrpc_local *local, struct rxrpc_txbuf *txb)
 	return ret;
 }
 
-/*
- * ACK transmitter for a local endpoint. The UDP socket locks around each
- * transmission, so we can only transmit one packet at a time, ACK, DATA or
- * otherwise.
- */
-void rxrpc_transmit_ack_packets(struct rxrpc_local *local)
-{
-	LIST_HEAD(queue);
-	int ret;
-
-	rxrpc_see_local(local, rxrpc_local_see_tx_ack);
-
-	if (list_empty(&local->ack_tx_queue))
-		return;
-
-	spin_lock(&local->ack_tx_lock);
-	list_splice_tail_init(&local->ack_tx_queue, &queue);
-	spin_unlock(&local->ack_tx_lock);
-
-	while (!list_empty(&queue)) {
-		struct rxrpc_txbuf *txb =
-			list_entry(queue.next, struct rxrpc_txbuf, tx_link);
-
-		ret = rxrpc_send_ack_packet(local, txb);
-		if (ret < 0 && ret != -ECONNRESET) {
-			spin_lock(&local->ack_tx_lock);
-			list_splice_init(&queue, &local->ack_tx_queue);
-			spin_unlock(&local->ack_tx_lock);
-			break;
-		}
-
-		list_del_init(&txb->tx_link);
-		rxrpc_put_call(txb->call, rxrpc_call_put_send_ack);
-		rxrpc_put_txbuf(txb, rxrpc_txbuf_put_ack_tx);
-	}
-}
-
 /*
  * Send an ABORT call packet.
  */
diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c
index 3a8576e9daf3..36b25d003cf0 100644
--- a/net/rxrpc/recvmsg.c
+++ b/net/rxrpc/recvmsg.c
@@ -320,7 +320,6 @@ static int rxrpc_recvmsg_data(struct socket *sock, struct rxrpc_call *call,
 				ret = ret2;
 				goto out;
 			}
-			rxrpc_transmit_ack_packets(call->peer->local);
 		} else {
 			trace_rxrpc_recvdata(call, rxrpc_recvmsg_cont, seq,
 					     rx_pkt_offset, rx_pkt_len, 0);
@@ -502,7 +501,6 @@ int rxrpc_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
 		if (ret == -EAGAIN)
 			ret = 0;
 
-		rxrpc_transmit_ack_packets(call->peer->local);
 		if (!skb_queue_empty(&call->recvmsg_queue))
 			rxrpc_notify_socket(call);
 		break;
@@ -632,7 +630,6 @@ int rxrpc_kernel_recv_data(struct socket *sock, struct rxrpc_call *call,
 read_phase_complete:
 	ret = 1;
 out:
-	rxrpc_transmit_ack_packets(call->peer->local);
 	if (_service)
 		*_service = call->dest_srx.srx_service;
 	mutex_unlock(&call->user_mutex);
diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c
index 2c861c55ed70..9fa7e37f7155 100644
--- a/net/rxrpc/sendmsg.c
+++ b/net/rxrpc/sendmsg.c
@@ -276,8 +276,6 @@ static int rxrpc_send_data(struct rxrpc_sock *rx,
 		rxrpc_see_txbuf(txb, rxrpc_txbuf_see_send_more);
 
 	do {
-		rxrpc_transmit_ack_packets(call->peer->local);
-
 		if (!txb) {
 			size_t remain, bufsize, chunk, offset;
 
diff --git a/net/rxrpc/txbuf.c b/net/rxrpc/txbuf.c
index a5054389dfbb..d2cf2aac3adb 100644
--- a/net/rxrpc/txbuf.c
+++ b/net/rxrpc/txbuf.c
@@ -26,7 +26,6 @@ struct rxrpc_txbuf *rxrpc_alloc_txbuf(struct rxrpc_call *call, u8 packet_type,
 		INIT_LIST_HEAD(&txb->call_link);
 		INIT_LIST_HEAD(&txb->tx_link);
 		refcount_set(&txb->ref, 1);
-		txb->call = call;
 		txb->call_debug_id = call->debug_id;
 		txb->debug_id = atomic_inc_return(&rxrpc_txbuf_debug_ids);
 		txb->space = sizeof(txb->data);