From patchwork Tue Dec 6 15:58:44 2022
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 30393
Subject: [PATCH net-next 01/32] rxrpc: Fix missing unlock in rxrpc_do_sendmsg()
From: David Howells
To: netdev@vger.kernel.org
Cc: Marc Dionne, linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org
Date: Tue, 06 Dec 2022 15:58:44 +0000
Message-ID: <167034232478.1105287.13584934117599532331.stgit@warthog.procyon.org.uk>
In-Reply-To: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk>

One of the error paths in rxrpc_do_sendmsg() doesn't unlock the call mutex
before returning.  Fix it to do this.

Note that this still doesn't get rid of the checker warning:

    ../net/rxrpc/sendmsg.c:617:5: warning: context imbalance in 'rxrpc_do_sendmsg' - wrong count at exit

I think the interplay between the socket lock and the call's user_mutex may
be too complicated for the checker to analyse, especially as
rxrpc_new_client_call_for_sendmsg(), which it calls, returns with the call's
user_mutex held if successful but unconditionally drops the socket lock.

Fixes: e754eba685aa ("rxrpc: Provide a cmsg to specify the amount of Tx data for a call")
Signed-off-by: David Howells
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
---
 net/rxrpc/sendmsg.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c
index 9fa7e37f7155..cde1e65f16b4 100644
--- a/net/rxrpc/sendmsg.c
+++ b/net/rxrpc/sendmsg.c
@@ -625,7 +625,7 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len)
                 if (call->tx_total_len != -1 ||
                     call->tx_pending ||
                     call->tx_top != 0)
-                        goto error_put;
+                        goto out_put_unlock;
                 call->tx_total_len = p.call.tx_total_len;
         }
 }
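[Illustration, not part of the posted patch: a minimal userspace sketch of the unlock-on-error pattern the fix restores, using a pthread mutex as a stand-in for the call's user_mutex. The names (do_sendmsg_sketch, out_unlock) are invented for the example.]

    #include <errno.h>
    #include <pthread.h>

    static pthread_mutex_t user_mutex = PTHREAD_MUTEX_INITIALIZER;
    static long tx_total_len = -1;

    static int do_sendmsg_sketch(long new_total)
    {
            int ret;

            pthread_mutex_lock(&user_mutex);

            if (tx_total_len != -1) {
                    ret = -EINVAL;
                    goto out_unlock;        /* every failure path must reach the unlock */
            }

            tx_total_len = new_total;
            ret = 0;

    out_unlock:
            pthread_mutex_unlock(&user_mutex);
            return ret;
    }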
From patchwork Tue Dec 6 15:58:53 2022
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 30394
Subject: [PATCH net-next 02/32] rxrpc: Fix security setting propagation
From: David Howells
To: netdev@vger.kernel.org
Cc: Marc Dionne, linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org
Date: Tue, 06 Dec 2022 15:58:53 +0000
Message-ID: <167034233329.1105287.18187783114467913357.stgit@warthog.procyon.org.uk>
In-Reply-To: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk>

Fix the propagation of the security settings from sendmsg to the rxrpc_call
struct.

Signed-off-by: David Howells
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
---
 net/rxrpc/call_object.c | 1 +
 net/rxrpc/conn_client.c | 2 --
 net/rxrpc/security.c    | 6 +++---
 3 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
index be5eb8cdf549..89dcf60b1158 100644
--- a/net/rxrpc/call_object.c
+++ b/net/rxrpc/call_object.c
@@ -217,6 +217,7 @@ static struct rxrpc_call *rxrpc_alloc_client_call(struct rxrpc_sock *rx,
         call->tx_total_len = p->tx_total_len;
         call->key = key_get(cp->key);
         call->local = rxrpc_get_local(cp->local, rxrpc_local_get_call);
+        call->security_level = cp->security_level;
         if (p->kernel)
                 __set_bit(RXRPC_CALL_KERNEL, &call->flags);
         if (cp->upgrade)
diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c
index a08e33c9e54b..87efa0373aed 100644
--- a/net/rxrpc/conn_client.c
+++ b/net/rxrpc/conn_client.c
@@ -551,8 +551,6 @@ static void rxrpc_activate_one_channel(struct rxrpc_connection *conn,
         call->conn = rxrpc_get_connection(conn, rxrpc_conn_get_activate_call);
         call->cid = conn->proto.cid | channel;
         call->call_id = call_id;
-        call->security = conn->security;
-        call->security_ix = conn->security_ix;
         call->dest_srx.srx_service = conn->service_id;
         trace_rxrpc_connect_call(call);
diff --git a/net/rxrpc/security.c b/net/rxrpc/security.c
index 209f2c25a0da..ab968f65a490 100644
--- a/net/rxrpc/security.c
+++ b/net/rxrpc/security.c
@@ -67,13 +67,13 @@ const struct rxrpc_security *rxrpc_security_lookup(u8 security_index)
  */
 int rxrpc_init_client_call_security(struct rxrpc_call *call)
 {
-        const struct rxrpc_security *sec;
+        const struct rxrpc_security *sec = &rxrpc_no_security;
         struct rxrpc_key_token *token;
         struct key *key = call->key;
         int ret;
         if (!key)
-                return 0;
+                goto found;
         ret = key_validate(key);
         if (ret < 0)
@@ -88,7 +88,7 @@ int rxrpc_init_client_call_security(struct rxrpc_call *call)
 found:
         call->security = sec;
-        _leave(" = 0");
+        call->security_ix = sec->security_index;
         return 0;
 }
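[Illustration, not part of the posted patch: a sketch of the "initialise to a default and fall through to a common found: label" shape used in rxrpc_init_client_call_security() above, so that both the security pointer and the security index get set on every path. The types and values below are invented for the example.]

    struct security_ops {
            int security_index;
            const char *name;
    };

    static const struct security_ops no_security = { 0, "none" };
    static const struct security_ops rxkad_like  = { 2, "rxkad" };

    struct call_sketch {
            const struct security_ops *security;
            int security_ix;
    };

    static int init_call_security_sketch(struct call_sketch *call, int have_key)
    {
            const struct security_ops *sec = &no_security;

            if (!have_key)
                    goto found;             /* no key: keep the default security */

            sec = &rxkad_like;              /* stand-in for the real key lookup */
    found:
            /* Both fields are set on every path, so neither is left stale. */
            call->security = sec;
            call->security_ix = sec->security_index;
            return 0;
    }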
From patchwork Tue Dec 6 15:59:01 2022
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 30404
Subject: [PATCH net-next 03/32] rxrpc: Simplify rxrpc_implicit_end_call()
From: David Howells
To: netdev@vger.kernel.org
Cc: Marc Dionne, linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org
Date: Tue, 06 Dec 2022 15:59:01 +0000
Message-ID: <167034234176.1105287.11630154459420260928.stgit@warthog.procyon.org.uk>
In-Reply-To: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk>

rxrpc_implicit_end_call() can be simplified a bit: now that it is run in
process context, it can call rxrpc_disconnect_call() directly.

Signed-off-by: David Howells
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
---
 net/rxrpc/input.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
index d0e20e946e48..dbd92f09c2ca 100644
--- a/net/rxrpc/input.c
+++ b/net/rxrpc/input.c
@@ -998,8 +998,6 @@ void rxrpc_input_call_packet(struct rxrpc_call *call, struct sk_buff *skb)
  */
 void rxrpc_implicit_end_call(struct rxrpc_call *call, struct sk_buff *skb)
 {
-        struct rxrpc_connection *conn = call->conn;
-
         switch (READ_ONCE(call->state)) {
         case RXRPC_CALL_SERVER_AWAIT_ACK:
                 rxrpc_call_completed(call);
@@ -1014,8 +1012,5 @@ void rxrpc_implicit_end_call(struct rxrpc_call *call, struct sk_buff *skb)
         }
         rxrpc_input_call_event(call, skb);
-
-        spin_lock(&conn->bundle->channel_lock);
-        __rxrpc_disconnect_call(conn, call);
-        spin_unlock(&conn->bundle->channel_lock);
+        rxrpc_disconnect_call(call);
 }
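[Illustration, not part of the posted patch: the simplification above replaces an open-coded lock/__helper/unlock sequence with a helper that takes its own lock. A small pthread-based sketch of that shape, with invented names:]

    #include <pthread.h>

    struct conn_sketch {
            pthread_mutex_t channel_lock;
            int nr_active_calls;
    };

    /* Caller must hold conn->channel_lock. */
    static void __disconnect_call_sketch(struct conn_sketch *conn)
    {
            conn->nr_active_calls--;
    }

    /* Locked wrapper: callers no longer open-code the locking. */
    static void disconnect_call_sketch(struct conn_sketch *conn)
    {
            pthread_mutex_lock(&conn->channel_lock);
            __disconnect_call_sketch(conn);
            pthread_mutex_unlock(&conn->channel_lock);
    }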
From patchwork Tue Dec 6 15:59:10 2022
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 30396
Subject: [PATCH net-next 04/32] rxrpc: Separate call retransmission from other conn events
From: David Howells
To: netdev@vger.kernel.org
Cc: Marc Dionne, linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org
Date: Tue, 06 Dec 2022 15:59:10 +0000
Message-ID: <167034235013.1105287.17719162958503710373.stgit@warthog.procyon.org.uk>
In-Reply-To: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk>

Call rxrpc_conn_retransmit_call() directly from rxrpc_input_packet() rather
than calling it via connection event handling.

Signed-off-by: David Howells
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
---
 net/rxrpc/ar-internal.h |  2 ++
 net/rxrpc/conn_event.c  | 34 +++++-----------------------------
 net/rxrpc/io_thread.c   |  2 +-
 3 files changed, 8 insertions(+), 30 deletions(-)

diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
index e7dccab7b741..cb227b85aae1 100644
--- a/net/rxrpc/ar-internal.h
+++ b/net/rxrpc/ar-internal.h
@@ -895,6 +895,8 @@ void rxrpc_clean_up_local_conns(struct rxrpc_local *);
 /*
  * conn_event.c
  */
+void rxrpc_conn_retransmit_call(struct rxrpc_connection *, struct sk_buff *,
+                                unsigned int);
 void rxrpc_process_connection(struct work_struct *);
 void rxrpc_process_delayed_final_acks(struct rxrpc_connection *, bool);
 int rxrpc_input_conn_packet(struct rxrpc_connection *conn, struct sk_buff *skb);
diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c
index 480364bcbf85..dfd29882126f 100644
--- a/net/rxrpc/conn_event.c
+++ b/net/rxrpc/conn_event.c
@@ -19,9 +19,9 @@
 /*
  * Retransmit terminal ACK or ABORT of the previous call.
  */
-static void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn,
-                                       struct sk_buff *skb,
-                                       unsigned int channel)
+void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn,
+                                struct sk_buff *skb,
+                                unsigned int channel)
 {
         struct rxrpc_skb_priv *sp = skb ? rxrpc_skb(skb) : NULL;
         struct rxrpc_channel *chan;
@@ -292,24 +292,6 @@ static int rxrpc_process_event(struct rxrpc_connection *conn,
         _enter("{%d},{%u,%%%u},", conn->debug_id, sp->hdr.type, sp->hdr.serial);
         switch (sp->hdr.type) {
-        case RXRPC_PACKET_TYPE_DATA:
-        case RXRPC_PACKET_TYPE_ACK:
-                rxrpc_conn_retransmit_call(conn, skb,
-                                           sp->hdr.cid & RXRPC_CHANNELMASK);
-                return 0;
-
-        case RXRPC_PACKET_TYPE_BUSY:
-                /* Just ignore BUSY packets for now. */
-                return 0;
-
-        case RXRPC_PACKET_TYPE_ABORT:
-                conn->error = -ECONNABORTED;
-                conn->abort_code = skb->priority;
-                conn->state = RXRPC_CONN_REMOTELY_ABORTED;
-                set_bit(RXRPC_CONN_DONT_REUSE, &conn->flags);
-                rxrpc_abort_calls(conn, RXRPC_CALL_REMOTELY_ABORTED, sp->hdr.serial);
-                return -ECONNABORTED;
-
         case RXRPC_PACKET_TYPE_CHALLENGE:
                 return conn->security->respond_to_challenge(conn, skb,
                                                             _abort_code);
@@ -504,18 +486,12 @@ int rxrpc_input_conn_packet(struct rxrpc_connection *conn, struct sk_buff *skb)
         if (conn->state >= RXRPC_CONN_REMOTELY_ABORTED) {
                 _leave(" = -ECONNABORTED [%u]", conn->state);
-                return -ECONNABORTED;
+                return 0;
         }
         _enter("{%d},{%u,%%%u},", conn->debug_id, sp->hdr.type, sp->hdr.serial);
         switch (sp->hdr.type) {
-        case RXRPC_PACKET_TYPE_DATA:
-        case RXRPC_PACKET_TYPE_ACK:
-                rxrpc_conn_retransmit_call(conn, skb,
-                                           sp->hdr.cid & RXRPC_CHANNELMASK);
-                return 0;
-
         case RXRPC_PACKET_TYPE_BUSY:
                 /* Just ignore BUSY packets for now. */
                 return 0;
@@ -526,7 +502,7 @@ int rxrpc_input_conn_packet(struct rxrpc_connection *conn, struct sk_buff *skb)
                 conn->state = RXRPC_CONN_REMOTELY_ABORTED;
                 set_bit(RXRPC_CONN_DONT_REUSE, &conn->flags);
                 rxrpc_abort_calls(conn, RXRPC_CALL_REMOTELY_ABORTED, sp->hdr.serial);
-                return -ECONNABORTED;
+                return 0;
         case RXRPC_PACKET_TYPE_CHALLENGE:
         case RXRPC_PACKET_TYPE_RESPONSE:
diff --git a/net/rxrpc/io_thread.c b/net/rxrpc/io_thread.c
index d83ae3193032..c0751ff3712b 100644
--- a/net/rxrpc/io_thread.c
+++ b/net/rxrpc/io_thread.c
@@ -358,7 +358,7 @@ static int rxrpc_input_packet_on_conn(struct rxrpc_connection *conn,
                           sp->hdr.seq, sp->hdr.serial, sp->hdr.flags);
-        rxrpc_input_conn_packet(conn, skb);
+        rxrpc_conn_retransmit_call(conn, skb, channel);
         return 0;
 }
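[Illustration, not part of the posted patch: a userspace sketch of the dispatch change, handling the "packet for an already-completed call" case directly at the input stage instead of routing it through the generic connection event handler. All types and names are invented for the example.]

    #include <stdio.h>

    enum pkt_type { PKT_DATA, PKT_ACK, PKT_ABORT, PKT_CHALLENGE };

    struct conn_sketch;

    static void conn_retransmit_call_sketch(struct conn_sketch *conn, unsigned int channel)
    {
            (void)conn;
            printf("retransmit terminal ACK/ABORT on channel %u\n", channel);
    }

    static int input_packet_on_conn_sketch(struct conn_sketch *conn, enum pkt_type type,
                                           unsigned int channel)
    {
            switch (type) {
            case PKT_DATA:
            case PKT_ACK:
                    /* Terminal-call traffic is answered here and now. */
                    conn_retransmit_call_sketch(conn, channel);
                    return 0;
            default:
                    /* Everything else still goes to the connection event handler. */
                    return 1;
            }
    }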
From patchwork Tue Dec 6 15:59:16 2022
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 30395
Subject: [PATCH net-next 05/32] rxrpc: Convert call->recvmsg_lock to a spinlock
From: David Howells
To: netdev@vger.kernel.org
Cc: Marc Dionne, linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org
Date: Tue, 06 Dec 2022 15:59:16 +0000
Message-ID: <167034235675.1105287.3642482133403716968.stgit@warthog.procyon.org.uk>
In-Reply-To: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk>

Convert call->recvmsg_lock to a spinlock as it's only ever write-locked.

Signed-off-by: David Howells
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
---
 net/rxrpc/af_rxrpc.c    |  2 +-
 net/rxrpc/ar-internal.h |  2 +-
 net/rxrpc/call_object.c |  4 ++--
 net/rxrpc/recvmsg.c     | 12 ++++++------
 4 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c
index 7ea576f6ba4b..b9dadc89fede 100644
--- a/net/rxrpc/af_rxrpc.c
+++ b/net/rxrpc/af_rxrpc.c
@@ -783,7 +783,7 @@ static int rxrpc_create(struct net *net, struct socket *sock, int protocol,
         INIT_LIST_HEAD(&rx->sock_calls);
         INIT_LIST_HEAD(&rx->to_be_accepted);
         INIT_LIST_HEAD(&rx->recvmsg_q);
-        rwlock_init(&rx->recvmsg_lock);
+        spin_lock_init(&rx->recvmsg_lock);
         rwlock_init(&rx->call_lock);
         memset(&rx->srx, 0, sizeof(rx->srx));
diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
index cb227b85aae1..423f2e1eddb3 100644
--- a/net/rxrpc/ar-internal.h
+++ b/net/rxrpc/ar-internal.h
@@ -154,7 +154,7 @@ struct rxrpc_sock {
         struct list_head sock_calls;    /* List of calls owned by this socket */
         struct list_head to_be_accepted; /* calls awaiting acceptance */
         struct list_head recvmsg_q;     /* Calls awaiting recvmsg's attention */
-        rwlock_t recvmsg_lock;          /* Lock for recvmsg_q */
+        spinlock_t recvmsg_lock;        /* Lock for recvmsg_q */
         struct key *key;                /* security for this socket */
         struct key *securities;         /* list of server security descriptors */
         struct rb_root calls;           /* User ID -> call mapping */
diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
index 89dcf60b1158..36cc868b8922 100644
--- a/net/rxrpc/call_object.c
+++ b/net/rxrpc/call_object.c
@@ -538,7 +538,7 @@ void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call)
         del_timer_sync(&call->timer);
         /* Make sure we don't get any more notifications */
-        write_lock(&rx->recvmsg_lock);
+        spin_lock(&rx->recvmsg_lock);
         if (!list_empty(&call->recvmsg_link)) {
                 _debug("unlinking once-pending call %p { e=%lx f=%lx }",
@@ -551,7 +551,7 @@ void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call)
         call->recvmsg_link.next = NULL;
         call->recvmsg_link.prev = NULL;
-        write_unlock(&rx->recvmsg_lock);
+        spin_unlock(&rx->recvmsg_lock);
         if (put)
                 rxrpc_put_call(call, rxrpc_call_put_unnotify);
diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c
index 36b25d003cf0..0cde2b477711 100644
--- a/net/rxrpc/recvmsg.c
+++ b/net/rxrpc/recvmsg.c
@@ -40,12 +40,12 @@ void rxrpc_notify_socket(struct rxrpc_call *call)
                 call->notify_rx(sk, call, call->user_call_ID);
                 spin_unlock(&call->notify_lock);
         } else {
-                write_lock(&rx->recvmsg_lock);
+                spin_lock(&rx->recvmsg_lock);
                 if (list_empty(&call->recvmsg_link)) {
                         rxrpc_get_call(call, rxrpc_call_get_notify_socket);
                         list_add_tail(&call->recvmsg_link, &rx->recvmsg_q);
                 }
-                write_unlock(&rx->recvmsg_lock);
+                spin_unlock(&rx->recvmsg_lock);
                 if (!sock_flag(sk, SOCK_DEAD)) {
                         _debug("call %ps", sk->sk_data_ready);
@@ -441,14 +441,14 @@ int rxrpc_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
         /* Find the next call and dequeue it if we're not just peeking.  If we
          * do dequeue it, that comes with a ref that we will need to release.
          */
-        write_lock(&rx->recvmsg_lock);
+        spin_lock(&rx->recvmsg_lock);
         l = rx->recvmsg_q.next;
         call = list_entry(l, struct rxrpc_call, recvmsg_link);
         if (!(flags & MSG_PEEK))
                 list_del_init(&call->recvmsg_link);
         else
                 rxrpc_get_call(call, rxrpc_call_get_recvmsg);
-        write_unlock(&rx->recvmsg_lock);
+        spin_unlock(&rx->recvmsg_lock);
         trace_rxrpc_recvmsg(call, rxrpc_recvmsg_dequeue, 0);
@@ -536,9 +536,9 @@ int rxrpc_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
 error_requeue_call:
         if (!(flags & MSG_PEEK)) {
-                write_lock(&rx->recvmsg_lock);
+                spin_lock(&rx->recvmsg_lock);
                 list_add(&call->recvmsg_link, &rx->recvmsg_q);
-                write_unlock(&rx->recvmsg_lock);
+                spin_unlock(&rx->recvmsg_lock);
                 trace_rxrpc_recvmsg(call, rxrpc_recvmsg_requeue, 0);
         } else {
                 rxrpc_put_call(call, rxrpc_call_put_recvmsg);
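[Illustration, not part of the posted patch: a reader/writer lock that is only ever taken for writing degenerates into an exclusive lock, which is what the conversion above expresses. A userspace sketch with pthreads standing in for the kernel primitives:]

    #include <pthread.h>

    struct recvmsg_queue_sketch {
            pthread_mutex_t lock;   /* was a reader/writer lock, but no reader ever took it */
            int nr_items;
    };

    static void queue_add_sketch(struct recvmsg_queue_sketch *q)
    {
            pthread_mutex_lock(&q->lock);   /* was pthread_rwlock_wrlock() */
            q->nr_items++;
            pthread_mutex_unlock(&q->lock); /* was pthread_rwlock_unlock() */
    }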
From patchwork Tue Dec 6 15:59:25 2022
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 30397
Subject: [PATCH net-next 06/32] rxrpc: Convert call->state_lock to a spinlock
From: David Howells
To: netdev@vger.kernel.org
Cc: Marc Dionne, linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org
Date: Tue, 06 Dec 2022 15:59:25 +0000
Message-ID: <167034236532.1105287.7206596176961982004.stgit@warthog.procyon.org.uk>
In-Reply-To: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk>

Convert call->state_lock to a spinlock and use a store-release barrier when
setting the completion state.  The readers can then be made to read it
locklessly.

Signed-off-by: David Howells
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
---
 net/rxrpc/ar-internal.h |  2 +-
 net/rxrpc/call_object.c |  2 +-
 net/rxrpc/conn_client.c |  4 ++--
 net/rxrpc/conn_event.c  |  4 ++--
 net/rxrpc/input.c       |  6 +++---
 net/rxrpc/recvmsg.c     | 23 ++++++++++++-----------
 net/rxrpc/sendmsg.c     | 17 ++++++-----------
 7 files changed, 27 insertions(+), 31 deletions(-)

diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
index 423f2e1eddb3..755395d1f2ca 100644
--- a/net/rxrpc/ar-internal.h
+++ b/net/rxrpc/ar-internal.h
@@ -622,7 +622,7 @@ struct rxrpc_call {
         unsigned long flags;
         unsigned long events;
         spinlock_t notify_lock;         /* Kernel notification lock */
-        rwlock_t state_lock;            /* lock for state transition */
+        spinlock_t state_lock;          /* lock for state transition */
         u32 abort_code;                 /* Local/remote abort code */
         int error;                      /* Local error incurred */
         enum rxrpc_call_state state;    /* current state of call */
diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
index 36cc868b8922..07abf12e99bb 100644
--- a/net/rxrpc/call_object.c
+++ b/net/rxrpc/call_object.c
@@ -162,7 +162,7 @@ struct rxrpc_call *rxrpc_alloc_call(struct rxrpc_sock *rx, gfp_t gfp,
         init_waitqueue_head(&call->waitq);
         spin_lock_init(&call->notify_lock);
         spin_lock_init(&call->tx_lock);
-        rwlock_init(&call->state_lock);
+        spin_lock_init(&call->state_lock);
         refcount_set(&call->ref, 1);
         call->debug_id = debug_id;
         call->tx_total_len = -1;
diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c
index 87efa0373aed..ec8913de42c9 100644
--- a/net/rxrpc/conn_client.c
+++ b/net/rxrpc/conn_client.c
@@ -555,9 +555,9 @@ static void rxrpc_activate_one_channel(struct rxrpc_connection *conn,
         trace_rxrpc_connect_call(call);
-        write_lock(&call->state_lock);
+        spin_lock(&call->state_lock);
         call->state = RXRPC_CALL_CLIENT_SEND_REQUEST;
-        write_unlock(&call->state_lock);
+        spin_unlock(&call->state_lock);
         /* Paired with the read barrier in rxrpc_connect_call().  This orders
          * cid and epoch in the connection wrt to call_id without the need to
diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c
index dfd29882126f..f05d58636307 100644
--- a/net/rxrpc/conn_event.c
+++ b/net/rxrpc/conn_event.c
@@ -265,12 +265,12 @@ static void rxrpc_call_is_secure(struct rxrpc_call *call)
 {
         _enter("%p", call);
         if (call) {
-                write_lock(&call->state_lock);
+                spin_lock(&call->state_lock);
                 if (call->state == RXRPC_CALL_SERVER_SECURING) {
                         call->state = RXRPC_CALL_SERVER_RECV_REQUEST;
                         rxrpc_notify_socket(call);
                 }
-                write_unlock(&call->state_lock);
+                spin_unlock(&call->state_lock);
         }
 }
diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
index dbd92f09c2ca..3b2e8e7d2e0f 100644
--- a/net/rxrpc/input.c
+++ b/net/rxrpc/input.c
@@ -257,7 +257,7 @@ static bool rxrpc_end_tx_phase(struct rxrpc_call *call, bool reply_begun,
         ASSERT(test_bit(RXRPC_CALL_TX_LAST, &call->flags));
-        write_lock(&call->state_lock);
+        spin_lock(&call->state_lock);
         state = call->state;
         switch (state) {
@@ -278,7 +278,7 @@ static bool rxrpc_end_tx_phase(struct rxrpc_call *call, bool reply_begun,
                 goto bad_state;
         }
-        write_unlock(&call->state_lock);
+        spin_unlock(&call->state_lock);
         if (state == RXRPC_CALL_CLIENT_AWAIT_REPLY)
                 trace_rxrpc_txqueue(call, rxrpc_txqueue_await_reply);
         else
@@ -287,7 +287,7 @@ static bool rxrpc_end_tx_phase(struct rxrpc_call *call, bool reply_begun,
         return true;
 bad_state:
-        write_unlock(&call->state_lock);
+        spin_unlock(&call->state_lock);
         kdebug("end_tx %s", rxrpc_call_states[call->state]);
         rxrpc_proto_abort(abort_why, call, call->tx_top);
         return false;
diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c
index 0cde2b477711..a9c9b2a8a27a 100644
--- a/net/rxrpc/recvmsg.c
+++ b/net/rxrpc/recvmsg.c
@@ -70,7 +70,8 @@ bool __rxrpc_set_call_completion(struct rxrpc_call *call,
                 call->abort_code = abort_code;
                 call->error = error;
                 call->completion = compl;
-                call->state = RXRPC_CALL_COMPLETE;
+                /* Allow reader of completion state to operate locklessly */
+                smp_store_release(&call->state, RXRPC_CALL_COMPLETE);
                 trace_rxrpc_call_complete(call);
                 wake_up(&call->waitq);
                 rxrpc_notify_socket(call);
@@ -87,9 +88,9 @@ bool rxrpc_set_call_completion(struct rxrpc_call *call,
         bool ret = false;
         if (call->state < RXRPC_CALL_COMPLETE) {
-                write_lock(&call->state_lock);
+                spin_lock(&call->state_lock);
                 ret = __rxrpc_set_call_completion(call, compl, abort_code, error);
-                write_unlock(&call->state_lock);
+                spin_unlock(&call->state_lock);
         }
         return ret;
 }
@@ -107,9 +108,9 @@ bool rxrpc_call_completed(struct rxrpc_call *call)
         bool ret = false;
         if (call->state < RXRPC_CALL_COMPLETE) {
-                write_lock(&call->state_lock);
+                spin_lock(&call->state_lock);
                 ret = __rxrpc_call_completed(call);
-                write_unlock(&call->state_lock);
+                spin_unlock(&call->state_lock);
         }
         return ret;
 }
@@ -131,9 +132,9 @@ bool rxrpc_abort_call(const char *why, struct rxrpc_call *call,
 {
         bool ret;
-        write_lock(&call->state_lock);
+        spin_lock(&call->state_lock);
         ret = __rxrpc_abort_call(why, call, seq, abort_code, error);
-        write_unlock(&call->state_lock);
+        spin_unlock(&call->state_lock);
         return ret;
 }
@@ -193,23 +194,23 @@ static void rxrpc_end_rx_phase(struct rxrpc_call *call, rxrpc_serial_t serial)
         if (call->state == RXRPC_CALL_CLIENT_RECV_REPLY)
                 rxrpc_propose_delay_ACK(call, serial,
                                         rxrpc_propose_ack_terminal_ack);
-        write_lock(&call->state_lock);
+        spin_lock(&call->state_lock);
         switch (call->state) {
         case RXRPC_CALL_CLIENT_RECV_REPLY:
                 __rxrpc_call_completed(call);
-                write_unlock(&call->state_lock);
+                spin_unlock(&call->state_lock);
                 break;
         case RXRPC_CALL_SERVER_RECV_REQUEST:
                 call->state = RXRPC_CALL_SERVER_ACK_REQUEST;
                 call->expect_req_by = jiffies + MAX_JIFFY_OFFSET;
-                write_unlock(&call->state_lock);
+                spin_unlock(&call->state_lock);
                 rxrpc_propose_delay_ACK(call, serial,
                                         rxrpc_propose_ack_processing_op);
                 break;
         default:
-                write_unlock(&call->state_lock);
+                spin_unlock(&call->state_lock);
                 break;
         }
 }
diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c
index cde1e65f16b4..816c1b083a69 100644
--- a/net/rxrpc/sendmsg.c
+++ b/net/rxrpc/sendmsg.c
@@ -195,7 +195,7 @@ static void rxrpc_queue_packet(struct rxrpc_sock *rx, struct rxrpc_call *call,
         if (last || call->state == RXRPC_CALL_SERVER_ACK_REQUEST) {
                 _debug("________awaiting reply/ACK__________");
-                write_lock(&call->state_lock);
+                spin_lock(&call->state_lock);
                 switch (call->state) {
                 case RXRPC_CALL_CLIENT_SEND_REQUEST:
                         call->state = RXRPC_CALL_CLIENT_AWAIT_REPLY;
@@ -218,7 +218,7 @@ static void rxrpc_queue_packet(struct rxrpc_sock *rx, struct rxrpc_call *call,
                 default:
                         break;
                 }
-                write_unlock(&call->state_lock);
+                spin_unlock(&call->state_lock);
         }
         if (poke)
@@ -354,12 +354,9 @@ static int rxrpc_send_data(struct rxrpc_sock *rx,
 success:
         ret = copied;
-        if (READ_ONCE(call->state) == RXRPC_CALL_COMPLETE) {
-                read_lock(&call->state_lock);
-                if (call->error < 0)
-                        ret = call->error;
-                read_unlock(&call->state_lock);
-        }
+        if (smp_load_acquire(&call->state) == RXRPC_CALL_COMPLETE &&
+            call->error < 0)
+                ret = call->error;
 out:
         call->tx_pending = txb;
         _leave(" = %d", ret);
@@ -715,7 +712,7 @@ int rxrpc_kernel_send_data(struct socket *sock, struct rxrpc_call *call,
         _debug("CALL %d USR %lx ST %d on CONN %p",
                call->debug_id, call->user_call_ID, call->state, call->conn);
-        switch (READ_ONCE(call->state)) {
+        switch (smp_load_acquire(&call->state)) {
         case RXRPC_CALL_CLIENT_SEND_REQUEST:
         case RXRPC_CALL_SERVER_ACK_REQUEST:
         case RXRPC_CALL_SERVER_SEND_REPLY:
@@ -723,9 +720,7 @@ int rxrpc_kernel_send_data(struct socket *sock, struct rxrpc_call *call,
                         notify_end_tx, &dropped_lock);
                 break;
         case RXRPC_CALL_COMPLETE:
-                read_lock(&call->state_lock);
                 ret = call->error;
-                read_unlock(&call->state_lock);
                 break;
         default:
                 /* Request phase complete for this client call */
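[Illustration, not part of the posted patch: a userspace sketch of the store-release/load-acquire pairing that lets the completion state be checked locklessly, with C11 atomics standing in for smp_store_release()/smp_load_acquire(). Types and names are invented for the example.]

    #include <stdatomic.h>

    enum call_state_sketch { CALL_ACTIVE, CALL_COMPLETE };

    struct call_sketch {
            int error;                      /* written before completion is published */
            _Atomic enum call_state_sketch state;
    };

    static void complete_call_sketch(struct call_sketch *call, int error)
    {
            call->error = error;
            /* Release: everything written above becomes visible to acquire readers. */
            atomic_store_explicit(&call->state, CALL_COMPLETE, memory_order_release);
    }

    static int call_error_if_complete_sketch(struct call_sketch *call)
    {
            if (atomic_load_explicit(&call->state, memory_order_acquire) == CALL_COMPLETE)
                    return call->error;     /* safe to read without the state lock */
            return 0;
    }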
3798903 Subject: [PATCH net-next 07/32] rxrpc: Only set/transmit aborts in the I/O thread From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Tue, 06 Dec 2022 15:59:33 +0000 Message-ID: <167034237388.1105287.8478228096444646375.stgit@warthog.procyon.org.uk> In-Reply-To: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> References: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.5 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1751481188257086398?= X-GMAIL-MSGID: =?utf-8?q?1751481188257086398?= Only set the abort call completion state in the I/O thread and only transmit ABORT packets from there. rxrpc_abort_call() can then be made to actually send the packet. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- include/trace/events/rxrpc.h | 1 + net/rxrpc/ar-internal.h | 6 +++++- net/rxrpc/call_event.c | 7 ++++++- net/rxrpc/call_object.c | 5 ++--- net/rxrpc/input.c | 6 ++---- net/rxrpc/recvmsg.c | 2 ++ net/rxrpc/sendmsg.c | 28 +++++++++++++++++++++------- 7 files changed, 39 insertions(+), 16 deletions(-) diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index 049b52e7aa6a..72f020b829e0 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -17,6 +17,7 @@ * Declare tracing information enums and their string mappings for display. 
*/ #define rxrpc_call_poke_traces \ + EM(rxrpc_call_poke_abort, "Abort") \ EM(rxrpc_call_poke_error, "Error") \ EM(rxrpc_call_poke_idle, "Idle") \ EM(rxrpc_call_poke_start, "Start") \ diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 755395d1f2ca..ca99e94318fc 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -623,7 +623,10 @@ struct rxrpc_call { unsigned long events; spinlock_t notify_lock; /* Kernel notification lock */ spinlock_t state_lock; /* lock for state transition */ - u32 abort_code; /* Local/remote abort code */ + const char *send_abort_why; /* String indicating why the abort was sent */ + s32 send_abort; /* Abort code to be sent */ + short send_abort_err; /* Error to be associated with the abort */ + s32 abort_code; /* Local/remote abort code */ int error; /* Local error incurred */ enum rxrpc_call_state state; /* current state of call */ enum rxrpc_call_completion completion; /* Call completion condition */ @@ -1146,6 +1149,7 @@ struct key *rxrpc_look_up_server_security(struct rxrpc_connection *, /* * sendmsg.c */ +bool rxrpc_propose_abort(struct rxrpc_call *, u32, int, const char *); int rxrpc_do_sendmsg(struct rxrpc_sock *, struct msghdr *, size_t); /* diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c index b2cf448fb02c..a8b5dff09999 100644 --- a/net/rxrpc/call_event.c +++ b/net/rxrpc/call_event.c @@ -336,6 +336,7 @@ void rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) unsigned long now, next, t; rxrpc_serial_t ackr_serial; bool resend = false, expired = false; + s32 abort_code; rxrpc_see_call(call, rxrpc_call_see_input); @@ -346,6 +347,11 @@ void rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) if (call->state == RXRPC_CALL_COMPLETE) goto out; + abort_code = smp_load_acquire(&call->send_abort); + if (abort_code) + rxrpc_abort_call(call->send_abort_why, call, 0, call->send_abort, + call->send_abort_err); + if (skb && skb->mark == RXRPC_SKB_MARK_ERROR) goto out; @@ -433,7 +439,6 @@ void rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) } else { rxrpc_abort_call("EXP", call, 0, RX_CALL_TIMEOUT, -ETIME); } - rxrpc_send_abort_packet(call); goto out; } diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c index 07abf12e99bb..f7606cdf4209 100644 --- a/net/rxrpc/call_object.c +++ b/net/rxrpc/call_object.c @@ -588,7 +588,7 @@ void rxrpc_release_calls_on_socket(struct rxrpc_sock *rx) call = list_entry(rx->to_be_accepted.next, struct rxrpc_call, accept_link); list_del(&call->accept_link); - rxrpc_abort_call("SKR", call, 0, RX_CALL_DEAD, -ECONNRESET); + rxrpc_propose_abort(call, RX_CALL_DEAD, -ECONNRESET, "SKR"); rxrpc_put_call(call, rxrpc_call_put_release_sock_tba); } @@ -596,8 +596,7 @@ void rxrpc_release_calls_on_socket(struct rxrpc_sock *rx) call = list_entry(rx->sock_calls.next, struct rxrpc_call, sock_link); rxrpc_get_call(call, rxrpc_call_get_release_sock); - rxrpc_abort_call("SKT", call, 0, RX_CALL_DEAD, -ECONNRESET); - rxrpc_send_abort_packet(call); + rxrpc_propose_abort(call, RX_CALL_DEAD, -ECONNRESET, "SKT"); rxrpc_release_call(rx, call); rxrpc_put_call(call, rxrpc_call_put_release_sock); } diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index 3b2e8e7d2e0f..58077b5bfb68 100644 --- a/net/rxrpc/input.c +++ b/net/rxrpc/input.c @@ -12,8 +12,7 @@ static void rxrpc_proto_abort(const char *why, struct rxrpc_call *call, rxrpc_seq_t seq) { - if (rxrpc_abort_call(why, call, seq, RX_PROTOCOL_ERROR, -EBADMSG)) - rxrpc_send_abort_packet(call); + rxrpc_abort_call(why, 
call, seq, RX_PROTOCOL_ERROR, -EBADMSG); } /* @@ -1005,8 +1004,7 @@ void rxrpc_implicit_end_call(struct rxrpc_call *call, struct sk_buff *skb) case RXRPC_CALL_COMPLETE: break; default: - if (rxrpc_abort_call("IMP", call, 0, RX_CALL_DEAD, -ESHUTDOWN)) - rxrpc_send_abort_packet(call); + rxrpc_abort_call("IMP", call, 0, RX_CALL_DEAD, -ESHUTDOWN); trace_rxrpc_improper_term(call); break; } diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c index a9c9b2a8a27a..492608973935 100644 --- a/net/rxrpc/recvmsg.c +++ b/net/rxrpc/recvmsg.c @@ -135,6 +135,8 @@ bool rxrpc_abort_call(const char *why, struct rxrpc_call *call, spin_lock(&call->state_lock); ret = __rxrpc_abort_call(why, call, seq, abort_code, error); spin_unlock(&call->state_lock); + if (ret) + rxrpc_send_abort_packet(call); return ret; } diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c index 816c1b083a69..c8cb3b76633c 100644 --- a/net/rxrpc/sendmsg.c +++ b/net/rxrpc/sendmsg.c @@ -17,6 +17,25 @@ #include #include "ar-internal.h" +/* + * Propose an abort to be made in the I/O thread. + */ +bool rxrpc_propose_abort(struct rxrpc_call *call, + u32 abort_code, int error, const char *why) +{ + _enter("{%d},%d,%d,%s", call->debug_id, abort_code, error, why); + + if (!call->send_abort && call->state < RXRPC_CALL_COMPLETE) { + call->send_abort_why = why; + call->send_abort_err = error; + smp_store_release(&call->send_abort, abort_code); + rxrpc_poke_call(call, rxrpc_call_poke_abort); + return true; + } + + return false; +} + /* * Return true if there's sufficient Tx queue space. */ @@ -660,9 +679,8 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len) /* it's too late for this call */ ret = -ESHUTDOWN; } else if (p.command == RXRPC_CMD_SEND_ABORT) { + rxrpc_propose_abort(call, p.abort_code, -ECONNABORTED, "CMD"); ret = 0; - if (rxrpc_abort_call("CMD", call, 0, p.abort_code, -ECONNABORTED)) - ret = rxrpc_send_abort_packet(call); } else if (p.command != RXRPC_CMD_SEND_DATA) { ret = -EINVAL; } else { @@ -755,11 +773,7 @@ bool rxrpc_kernel_abort_call(struct socket *sock, struct rxrpc_call *call, _enter("{%d},%d,%d,%s", call->debug_id, abort_code, error, why); mutex_lock(&call->user_mutex); - - aborted = rxrpc_abort_call(why, call, 0, abort_code, error); - if (aborted) - rxrpc_send_abort_packet(call); - + aborted = rxrpc_propose_abort(call, abort_code, error, why); mutex_unlock(&call->user_mutex); return aborted; } From patchwork Tue Dec 6 15:59:42 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 30398 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp2904878wrr; Tue, 6 Dec 2022 08:03:44 -0800 (PST) X-Google-Smtp-Source: AA0mqf5v1nn/wr7kvkZmz0G9s7MzoZ08ubj8aaFyp2+4hrKGYr9xcWAfEw9jabAsJZXJAAVMnCmx X-Received: by 2002:a17:906:383:b0:7be:bf2f:f7f with SMTP id b3-20020a170906038300b007bebf2f0f7fmr36611370eja.270.1670342624620; Tue, 06 Dec 2022 08:03:44 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1670342624; cv=none; d=google.com; s=arc-20160816; b=lCUgO+WXxzW+7N09/dBAB+mLfUZIu5hT8TEr1CE2qI92pH9mMWOH4BHIYleA7eBL/7 9N9EDFSAoI2qXnZn0Jf68bVEeLHxR1STA+OEYcMnXqPbF+Nht85ywAE6HNNrlHLZZGN1 U5IskiZOoBsuVBqpVZWpHOJWDuXkNXvvBPyJOax0DLM10IbnaDoG9lAyqiagF73kuYzJ 0Xg0EsVNAYboUnIeEcz+lDQ0jyVn9fh9GYmv2p1lIlXKR/wY1dewEjSVih5jFGH00n0Q xiVolyJB0Shn1+qy9C6EPYSHzy82Jl1newWKJxvGv03rOUbkUTMPAaSBzNn3/4SHRQfh zptQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; 
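The handoff introduced above works in two halves: any context may propose an abort by filling in send_abort_why/send_abort_err and then release-storing a non-zero code into send_abort and poking the call, while only the I/O thread acquire-loads the code and actually completes the call and transmits the ABORT packet. A simplified userspace sketch of that handoff follows; the field names mirror the patch, but propose_abort(), io_thread_service() and fake_call are stand-ins, and the sketch assumes proposers are serialised by the caller.

#include <errno.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_call {
	const char *send_abort_why;	/* why the abort was proposed */
	int send_abort_err;		/* error to complete the call with */
	_Atomic int send_abort;		/* abort code; 0 = nothing pending */
};

static bool propose_abort(struct fake_call *call, int code, int err, const char *why)
{
	if (atomic_load_explicit(&call->send_abort, memory_order_relaxed))
		return false;		/* an abort is already pending */
	call->send_abort_why = why;
	call->send_abort_err = err;
	/* release store: the fields above are visible once the code is seen */
	atomic_store_explicit(&call->send_abort, code, memory_order_release);
	return true;			/* the real code would now poke the call */
}

static void io_thread_service(struct fake_call *call)
{
	/* models abort_code = smp_load_acquire(&call->send_abort) */
	int code = atomic_load_explicit(&call->send_abort, memory_order_acquire);

	if (code)	/* here the real code aborts the call and sends ABORT */
		printf("abort code %d (%s), err %d\n", code,
		       call->send_abort_why, call->send_abort_err);
}

int main(void)
{
	struct fake_call call = { .send_abort = 0 };

	propose_abort(&call, 1 /* placeholder abort code */, -ECONNRESET, "SKT");
	io_thread_service(&call);
	return 0;
}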
Ltd. Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SI4 1TE, United Kingdom. Registered in England and Wales under Company Registration No. 3798903 Subject: [PATCH net-next 08/32] rxrpc: Only disconnect calls in the I/O thread From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Tue, 06 Dec 2022 15:59:42 +0000 Message-ID: <167034238272.1105287.7060192019565466718.stgit@warthog.procyon.org.uk> In-Reply-To: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> References: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.1 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1751481187675127988?= X-GMAIL-MSGID: =?utf-8?q?1751481187675127988?= Only perform call disconnection in the I/O thread to reduce the locking requirement. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- include/trace/events/rxrpc.h | 1 + net/rxrpc/call_event.c | 14 +++++++++++++- net/rxrpc/call_object.c | 9 +-------- net/rxrpc/input.c | 1 - net/rxrpc/recvmsg.c | 1 + 5 files changed, 16 insertions(+), 10 deletions(-) diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index 72f020b829e0..49a0f799cdef 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -18,6 +18,7 @@ */ #define rxrpc_call_poke_traces \ EM(rxrpc_call_poke_abort, "Abort") \ + EM(rxrpc_call_poke_complete, "Compl") \ EM(rxrpc_call_poke_error, "Error") \ EM(rxrpc_call_poke_idle, "Idle") \ EM(rxrpc_call_poke_start, "Start") \ diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c index a8b5dff09999..bf6858e69187 100644 --- a/net/rxrpc/call_event.c +++ b/net/rxrpc/call_event.c @@ -347,6 +347,13 @@ void rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) if (call->state == RXRPC_CALL_COMPLETE) goto out; + if (!call->conn) { + printk("\n"); + printk("\n"); + kdebug("no conn %u", call->state); + printk("\n"); + } + abort_code = smp_load_acquire(&call->send_abort); if (abort_code) rxrpc_abort_call(call->send_abort_why, call, 0, call->send_abort, @@ -479,8 +486,13 @@ void rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) } out: - if (call->state == RXRPC_CALL_COMPLETE) + if (call->state == RXRPC_CALL_COMPLETE) { del_timer_sync(&call->timer); + if (!test_bit(RXRPC_CALL_DISCONNECTED, &call->flags)) + rxrpc_disconnect_call(call); + if (call->security) + call->security->free_call_crypto(call); + } if (call->acks_hard_ack != call->tx_bottom) rxrpc_shrink_call_tx_buffer(call); _leave(""); diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c index f7606cdf4209..a9c77be9107a 100644 --- a/net/rxrpc/call_object.c +++ b/net/rxrpc/call_object.c @@ -50,7 +50,7 @@ void rxrpc_poke_call(struct rxrpc_call *call, enum rxrpc_call_poke_trace what) struct rxrpc_local *local = call->local; bool busy; - if (call->state < RXRPC_CALL_COMPLETE) { + if 
(!test_bit(RXRPC_CALL_DISCONNECTED, &call->flags)) { spin_lock_bh(&local->lock); busy = !list_empty(&call->attend_link); trace_rxrpc_poke_call(call, busy, what); @@ -529,13 +529,10 @@ void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call) trace_rxrpc_call(call->debug_id, refcount_read(&call->ref), call->flags, rxrpc_call_see_release); - ASSERTCMP(call->state, ==, RXRPC_CALL_COMPLETE); - if (test_and_set_bit(RXRPC_CALL_RELEASED, &call->flags)) BUG(); rxrpc_put_call_slot(call); - del_timer_sync(&call->timer); /* Make sure we don't get any more notifications */ spin_lock(&rx->recvmsg_lock); @@ -568,10 +565,6 @@ void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call) _debug("RELEASE CALL %p (%d CONN %p)", call, call->debug_id, conn); - if (conn && !test_bit(RXRPC_CALL_DISCONNECTED, &call->flags)) - rxrpc_disconnect_call(call); - if (call->security) - call->security->free_call_crypto(call); _leave(""); } diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index 58077b5bfb68..a72fd2f78fc0 100644 --- a/net/rxrpc/input.c +++ b/net/rxrpc/input.c @@ -1010,5 +1010,4 @@ void rxrpc_implicit_end_call(struct rxrpc_call *call, struct sk_buff *skb) } rxrpc_input_call_event(call, skb); - rxrpc_disconnect_call(call); } diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c index 492608973935..88298ab8a9d7 100644 --- a/net/rxrpc/recvmsg.c +++ b/net/rxrpc/recvmsg.c @@ -202,6 +202,7 @@ static void rxrpc_end_rx_phase(struct rxrpc_call *call, rxrpc_serial_t serial) case RXRPC_CALL_CLIENT_RECV_REPLY: __rxrpc_call_completed(call); spin_unlock(&call->state_lock); + rxrpc_poke_call(call, rxrpc_call_poke_complete); break; case RXRPC_CALL_SERVER_RECV_REQUEST: From patchwork Tue Dec 6 15:59:51 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 30400 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp2904914wrr; Tue, 6 Dec 2022 08:03:47 -0800 (PST) X-Google-Smtp-Source: AA0mqf7OptZTI+V0J+1xHyvs+s2SQ6yOgJoZYLkt9K3Yb9u/686XzB3nSMXa4G8V2pcPIZg/X0fA X-Received: by 2002:a62:c144:0:b0:577:6525:fc99 with SMTP id i65-20020a62c144000000b005776525fc99mr56015pfg.9.1670342626754; Tue, 06 Dec 2022 08:03:46 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1670342626; cv=none; d=google.com; s=arc-20160816; b=GV77SQrAw0JOM0PTpnp3GLghFP59xuNM9hwASU/4fv+4kSjcAs/apJR9BFbGV9Cr+3 UYlLpSG8wOn8Lbv1h/RqLeXsHjlc80H+4GGUrWH5bp4kntAFCLHggEt0rUylU9nYzdgt jilDscEYxsltGdwF+Q5r/cZrHn32Ni3/ILeD6rKvN/cn8cHgIEkNOWhabw2SHgc/N7m+ kLX5pBBl0FgjFCferitY+uGLbISzcSinlOTpJy8T+gWqN2Mb1blEmZl56p9Vg62Lt4nj uHiml7HwzDQi10GUaPi+5x14IGkX4xPY8a/V6nQx0PzX77eM4RRKLEQ4XxhlC34pU40s dRGw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:cc:to:from :subject:organization:dkim-signature; bh=h65fxFXgwEwBJ/pZOtw1EZM2S77bNXNr+HJdRpw+4N8=; b=GAQotAcVD8YH94xaAEChfz/geAER3b+GwyRnP3R6jbRnjGnm+36HUQL0GzNLyUl3ao gWww5y7ieeyhXBnOcHIpfGPhLjnj53IIzQ8wou1QW37aVEyOWY6tPJPRZnVQ6lCgfWfB iil0xWi5WjrB2W0yTSVOLnF6Mc2FwXcLpsLU7Gla55j3S6yLkoHqvy1iC4I6yxpOYDc/ AoUcnS4zZEqRcGCMVKvUK5hvD20Y+GlWAOQitvsx1t875wWivmYPc+bwGlJ2TBpeWbBW 4/idD1ItvyIjQRURZXy453CrPBUlOQxFXwHpy2sG6+rqCGtfgMpP8RY9Ec15u0qbPaGF eQAg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=Ikzyt4yr; spf=pass (google.com: domain of 
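Condensed restatement of the completion-time teardown the call_event.c hunk above adds: once a call reaches RXRPC_CALL_COMPLETE, the I/O thread itself stops the timer, disconnects the call from its connection if that has not already happened, and frees the per-call crypto, rather than leaving that work to rxrpc_release_call(). The sketch below is not kernel code; the struct and helpers are stand-ins for the real fields and functions.

#include <stdbool.h>
#include <stdio.h>

struct fake_call {
	int state;		/* 1 == COMPLETE in this sketch */
	bool disconnected;	/* models RXRPC_CALL_DISCONNECTED */
	bool has_crypto;	/* models call->security being set */
};

static void fake_del_timer(struct fake_call *call)   { (void)call; puts("timer stopped"); }
static void fake_disconnect(struct fake_call *call)  { call->disconnected = true; puts("disconnected"); }
static void fake_free_crypto(struct fake_call *call) { call->has_crypto = false; puts("crypto freed"); }

static void input_call_event_tail(struct fake_call *call)
{
	if (call->state == 1 /* COMPLETE */) {
		fake_del_timer(call);
		if (!call->disconnected)
			fake_disconnect(call);
		if (call->has_crypto)
			fake_free_crypto(call);
	}
}

int main(void)
{
	struct fake_call call = { .state = 1, .disconnected = false, .has_crypto = true };

	input_call_event_tail(&call);
	return 0;
}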
3798903 Subject: [PATCH net-next 09/32] rxrpc: Allow a delay to be injected into packet reception From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Tue, 06 Dec 2022 15:59:51 +0000 Message-ID: <167034239131.1105287.15054038294627892172.stgit@warthog.procyon.org.uk> In-Reply-To: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> References: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.9 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1751481189777714267?= X-GMAIL-MSGID: =?utf-8?q?1751481189777714267?= If CONFIG_AF_RXRPC_DEBUG_RX_DELAY=y, then a delay is injected between packets and errors being received and them being made available to the processing code, thereby allowing the RTT to be artificially increased. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- net/rxrpc/Kconfig | 9 +++++++++ net/rxrpc/ar-internal.h | 6 ++++++ net/rxrpc/io_thread.c | 44 +++++++++++++++++++++++++++++++++++++++++++- net/rxrpc/local_object.c | 6 ++++++ net/rxrpc/misc.c | 7 +++++++ net/rxrpc/sysctl.c | 17 ++++++++++++++++- 6 files changed, 87 insertions(+), 2 deletions(-) diff --git a/net/rxrpc/Kconfig b/net/rxrpc/Kconfig index 7ae023b37a83..a20986806fea 100644 --- a/net/rxrpc/Kconfig +++ b/net/rxrpc/Kconfig @@ -36,6 +36,15 @@ config AF_RXRPC_INJECT_LOSS Say Y here to inject packet loss by discarding some received and some transmitted packets. +config AF_RXRPC_INJECT_RX_DELAY + bool "Inject delay into packet reception" + depends on SYSCTL + help + Say Y here to inject a delay into packet reception, allowing an + extended RTT time to be modelled. The delay can be configured using + /proc/sys/net/rxrpc/rxrpc_inject_rx_delay, setting a number of + milliseconds up to 0.5s (note that the granularity is actually in + jiffies). 
config AF_RXRPC_DEBUG bool "RxRPC dynamic debugging" diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index ca99e94318fc..ddba8048b7cb 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -289,6 +289,9 @@ struct rxrpc_local { struct task_struct *io_thread; struct rxrpc_sock __rcu *service; /* Service(s) listening on this endpoint */ struct rw_semaphore defrag_sem; /* control re-enablement of IP DF bit */ +#ifdef CONFIG_AF_RXRPC_INJECT_RX_DELAY + struct sk_buff_head rx_delay_queue; /* Delay injection queue */ +#endif struct sk_buff_head rx_queue; /* Received packets */ struct list_head call_attend_q; /* Calls requiring immediate attention */ struct rb_root client_bundles; /* Client connection bundles by socket params */ @@ -1034,6 +1037,9 @@ extern unsigned long rxrpc_idle_ack_delay; extern unsigned int rxrpc_rx_window_size; extern unsigned int rxrpc_rx_mtu; extern unsigned int rxrpc_rx_jumbo_max; +#ifdef CONFIG_AF_RXRPC_INJECT_RX_DELAY +extern unsigned long rxrpc_inject_rx_delay; +#endif /* * net_ns.c diff --git a/net/rxrpc/io_thread.c b/net/rxrpc/io_thread.c index c0751ff3712b..163b3414563c 100644 --- a/net/rxrpc/io_thread.c +++ b/net/rxrpc/io_thread.c @@ -36,7 +36,14 @@ int rxrpc_encap_rcv(struct sock *udp_sk, struct sk_buff *skb) skb->mark = RXRPC_SKB_MARK_PACKET; rxrpc_new_skb(skb, rxrpc_skb_new_encap_rcv); - skb_queue_tail(&local->rx_queue, skb); +#ifdef CONFIG_AF_RXRPC_INJECT_RX_DELAY + if (rxrpc_inject_rx_delay || + !skb_queue_empty(&local->rx_delay_queue)) { + skb->tstamp = ktime_add_ms(skb->tstamp, rxrpc_inject_rx_delay); + skb_queue_tail(&local->rx_delay_queue, skb); + } else +#endif + skb_queue_tail(&local->rx_queue, skb); rxrpc_wake_up_io_thread(local); return 0; } @@ -425,6 +432,9 @@ int rxrpc_io_thread(void *data) struct rxrpc_local *local = data; struct rxrpc_call *call; struct sk_buff *skb; +#ifdef CONFIG_AF_RXRPC_INJECT_RX_DELAY + ktime_t now; +#endif skb_queue_head_init(&rx_queue); @@ -468,6 +478,17 @@ int rxrpc_io_thread(void *data) continue; } + /* Inject a delay into packets if requested. 
*/ +#ifdef CONFIG_AF_RXRPC_INJECT_RX_DELAY + now = ktime_get_real(); + while ((skb = skb_peek(&local->rx_delay_queue))) { + if (ktime_before(now, skb->tstamp)) + break; + skb = skb_dequeue(&local->rx_delay_queue); + skb_queue_tail(&local->rx_queue, skb); + } +#endif + if (!skb_queue_empty(&local->rx_queue)) { spin_lock_irq(&local->rx_queue.lock); skb_queue_splice_tail_init(&local->rx_queue, &rx_queue); @@ -484,6 +505,27 @@ int rxrpc_io_thread(void *data) if (kthread_should_stop()) break; + +#ifdef CONFIG_AF_RXRPC_INJECT_RX_DELAY + if ((skb = skb_peek(&local->rx_delay_queue))) { + unsigned long timeout; + ktime_t tstamp = skb->tstamp; + ktime_t now = ktime_get_real(); + s64 delay_ns = ktime_to_ns(ktime_sub(tstamp, now)); + + if (delay_ns <= 0) { + __set_current_state(TASK_RUNNING); + continue; + } + + timeout = nsecs_to_jiffies(delay_ns); + timeout = max(timeout, 1UL); + schedule_timeout(timeout); + __set_current_state(TASK_RUNNING); + continue; + } +#endif + schedule(); } diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c index 44222923c0d1..b35628ca57a5 100644 --- a/net/rxrpc/local_object.c +++ b/net/rxrpc/local_object.c @@ -97,6 +97,9 @@ static struct rxrpc_local *rxrpc_alloc_local(struct rxrpc_net *rxnet, local->rxnet = rxnet; INIT_HLIST_NODE(&local->link); init_rwsem(&local->defrag_sem); +#ifdef CONFIG_AF_RXRPC_INJECT_RX_DELAY + skb_queue_head_init(&local->rx_delay_queue); +#endif skb_queue_head_init(&local->rx_queue); INIT_LIST_HEAD(&local->call_attend_q); local->client_bundles = RB_ROOT; @@ -403,6 +406,9 @@ void rxrpc_destroy_local(struct rxrpc_local *local) /* At this point, there should be no more packets coming in to the * local endpoint. */ +#ifdef CONFIG_AF_RXRPC_INJECT_RX_DELAY + rxrpc_purge_queue(&local->rx_delay_queue); +#endif rxrpc_purge_queue(&local->rx_queue); } diff --git a/net/rxrpc/misc.c b/net/rxrpc/misc.c index 056c428d8bf3..658a2b7bf94e 100644 --- a/net/rxrpc/misc.c +++ b/net/rxrpc/misc.c @@ -53,3 +53,10 @@ unsigned int rxrpc_rx_mtu = 5692; * sender that we're willing to handle. */ unsigned int rxrpc_rx_jumbo_max = 4; + +#ifdef CONFIG_AF_RXRPC_INJECT_RX_DELAY +/* + * The delay to inject into packet reception. + */ +unsigned long rxrpc_inject_rx_delay = 0; +#endif diff --git a/net/rxrpc/sysctl.c b/net/rxrpc/sysctl.c index cde3224a5cd2..ecaeb4ecfb58 100644 --- a/net/rxrpc/sysctl.c +++ b/net/rxrpc/sysctl.c @@ -17,6 +17,9 @@ static const unsigned int n_65535 = 65535; static const unsigned int n_max_acks = 255; static const unsigned long one_jiffy = 1; static const unsigned long max_jiffies = MAX_JIFFY_OFFSET; +#ifdef CONFIG_AF_RXRPC_INJECT_RX_DELAY +static const unsigned long max_500 = 500; +#endif /* * RxRPC operating parameters. 
@@ -63,6 +66,19 @@ static struct ctl_table rxrpc_sysctl_table[] = { .extra2 = (void *)&max_jiffies, }, + /* Values used in milliseconds */ +#ifdef CONFIG_AF_RXRPC_INJECT_RX_DELAY + { + .procname = "inject_rx_delay", + .data = &rxrpc_inject_rx_delay, + .maxlen = sizeof(unsigned long), + .mode = 0644, + .proc_handler = proc_doulongvec_minmax, + .extra1 = (void *)SYSCTL_LONG_ZERO, + .extra2 = (void *)&max_500, + }, +#endif + /* Non-time values */ { .procname = "reap_client_conns", @@ -109,7 +125,6 @@ static struct ctl_table rxrpc_sysctl_table[] = { .extra1 = (void *)SYSCTL_ONE, .extra2 = (void *)&four, }, - { } }; From patchwork Tue Dec 6 15:59:59 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 30405 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp2905780wrr; Tue, 6 Dec 2022 08:04:51 -0800 (PST) X-Google-Smtp-Source: AA0mqf5XspkZdK6+Z6AhjFmL7rbLUxuommDuKX1Ht5kE3SS7gMY2xVzVWCPZCNgX8miam8kdsmMU X-Received: by 2002:a05:6402:e0d:b0:466:4168:6ea7 with SMTP id h13-20020a0564020e0d00b0046641686ea7mr17542070edh.273.1670342691088; Tue, 06 Dec 2022 08:04:51 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1670342691; cv=none; d=google.com; s=arc-20160816; b=Oano7+A6fRw9BbxAenBR6icNZpUCfAZqhGeZ1dPSIUAMFHpaEkF43m699SsZiXD8VN EKT9WjYyd2mQQaaUhZv8XWmQcjFMJIY734TJTH93E4UltCuc45HVEBvp9gML+EEaymev OlDka4RGzg8MRFoL+VRB6juShMvn3CSgM+6JmtqY1gu+k8RVEGH7KjZz4NVTeJOKfIZi fSJRnlnoH7hGMnyB824aq0jAPvGVmzrYzlY001ROpip0uQaycVgCB9ayE4xbd0JAY5wo VjM8kVjyyWLHzEnyK3DwljQMHx5dVySA8sPUEs/1hgU3mav6CHlYt/0cR34FJYyiVEoq PErA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:cc:to:from :subject:organization:dkim-signature; bh=nqloZNwxTXUS3t/wAesCkZJpHZe7dBojOkrMlTPIvcM=; b=i0KJb5M69eXmkDhWB0RdaZn/9Dslz6xmtaTHw4kelrND5El/JoOLOM1FwW/bkSAF7Y Z95uNLvbqJsFULKcBm3+OQmjpBBPGud1JawOAsbRF4b3jR++jh6cyueNhfzPqtWHOy0E geBGmxgF/QbzAhbkwQYCW/j/qV3NNMyfVjg1l170nc0lLryNxoPJASXj1MqKKoKdSoSm ujVY5wpr/9XfCEGLlnt/MVo2xVIlgMXzte/XAd9fC5br4msGbYfwZNPIS9wQ2h2IuaAy NobXIK7mX+8bWZUIy3hTFCdo9f5Y8W6mKb2dB4BtrH5BM13JHiz6dNK7gihWP0ISJLME QHWA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=W1xk3NsP; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: from out1.vger.email (out1.vger.email. 
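For the receive-delay injection above, each incoming packet is stamped with its arrival time plus the configured delay (the inject_rx_delay sysctl registered in the table above, in milliseconds, capped at 500), parked on rx_delay_queue, and released by the I/O thread once the stamp falls due, sleeping for at least one jiffy otherwise. A rough userspace model of that timing arithmetic, with TICK_NS standing in for a jiffy and nothing here being actual rxrpc code:

#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define TICK_NS (1000000000LL / 100)	/* pretend HZ=100: one jiffy = 10ms */

static int64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_REALTIME, &ts);
	return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

int main(void)
{
	long inject_rx_delay_ms = 200;				/* the sysctl value */
	int64_t stamp = now_ns() + inject_rx_delay_ms * 1000000LL; /* skb->tstamp + delay */
	int64_t delta = stamp - now_ns();	/* time until the packet falls due */

	if (delta <= 0) {
		puts("packet already due: move it straight to the rx queue");
	} else {
		int64_t ticks = (delta + TICK_NS - 1) / TICK_NS;

		if (ticks < 1)
			ticks = 1;	/* mirrors timeout = max(timeout, 1UL) */
		printf("sleep for %lld tick(s) before releasing the packet\n",
		       (long long)ticks);
	}
	return 0;
}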
3798903 Subject: [PATCH net-next 10/32] rxrpc: Generate extra pings for RTT during heavy-receive call From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Tue, 06 Dec 2022 15:59:59 +0000 Message-ID: <167034239985.1105287.15155402480779520338.stgit@warthog.procyon.org.uk> In-Reply-To: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> References: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1751481257483184679?= X-GMAIL-MSGID: =?utf-8?q?1751481257483184679?= When doing a call that has a single transmitted data packet and a massive amount of received data packets, we only ping for one RTT sample, which means we don't get a good reading on it. Fix this by converting occasional IDLE ACKs into PING ACKs to elicit a response. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- include/trace/events/rxrpc.h | 3 ++- net/rxrpc/call_event.c | 15 ++++++++++++--- net/rxrpc/output.c | 7 +++++-- 3 files changed, 19 insertions(+), 6 deletions(-) diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index 49a0f799cdef..c47954aea7be 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -258,11 +258,12 @@ EM(rxrpc_propose_ack_client_tx_end, "ClTxEnd") \ EM(rxrpc_propose_ack_input_data, "DataIn ") \ EM(rxrpc_propose_ack_input_data_hole, "DataInH") \ - EM(rxrpc_propose_ack_ping_for_check_life, "ChkLife") \ EM(rxrpc_propose_ack_ping_for_keepalive, "KeepAlv") \ EM(rxrpc_propose_ack_ping_for_lost_ack, "LostAck") \ EM(rxrpc_propose_ack_ping_for_lost_reply, "LostRpl") \ + EM(rxrpc_propose_ack_ping_for_old_rtt, "OldRtt ") \ EM(rxrpc_propose_ack_ping_for_params, "Params ") \ + EM(rxrpc_propose_ack_ping_for_rtt, "Rtt ") \ EM(rxrpc_propose_ack_processing_op, "ProcOp ") \ EM(rxrpc_propose_ack_respond_to_ack, "Rsp2Ack") \ EM(rxrpc_propose_ack_respond_to_ping, "Rsp2Png") \ diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c index bf6858e69187..768bc8a63038 100644 --- a/net/rxrpc/call_event.c +++ b/net/rxrpc/call_event.c @@ -460,9 +460,18 @@ void rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) rxrpc_send_ACK(call, RXRPC_ACK_IDLE, 0, rxrpc_propose_ack_rx_idle); - if (atomic_read(&call->ackr_nr_unacked) > 2) - rxrpc_send_ACK(call, RXRPC_ACK_IDLE, 0, - rxrpc_propose_ack_input_data); + if (atomic_read(&call->ackr_nr_unacked) > 2) { + if (call->peer->rtt_count < 3) + rxrpc_send_ACK(call, RXRPC_ACK_PING, 0, + rxrpc_propose_ack_ping_for_rtt); + else if(ktime_before(ktime_add_ms(call->peer->rtt_last_req, 1000), + ktime_get_real())) + rxrpc_send_ACK(call, RXRPC_ACK_PING, 0, + rxrpc_propose_ack_ping_for_old_rtt); + else + rxrpc_send_ACK(call, RXRPC_ACK_IDLE, 0, + rxrpc_propose_ack_input_data); + } /* Make sure the timer is restarted */ if (call->state != RXRPC_CALL_COMPLETE) { diff --git a/net/rxrpc/output.c 
b/net/rxrpc/output.c index 3d8c9f830ee0..2a44958b1bc7 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -253,12 +253,15 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) iov_iter_kvec(&msg.msg_iter, WRITE, iov, 1, len); ret = do_udp_sendmsg(conn->local->socket, &msg, len); call->peer->last_tx_at = ktime_get_seconds(); - if (ret < 0) + if (ret < 0) { trace_rxrpc_tx_fail(call->debug_id, serial, ret, rxrpc_tx_point_call_ack); - else + } else { trace_rxrpc_tx_packet(call->debug_id, &txb->wire, rxrpc_tx_point_call_ack); + if (txb->wire.flags & RXRPC_REQUEST_ACK) + call->peer->rtt_last_req = ktime_get_real(); + } rxrpc_tx_backoff(call, ret); if (call->state < RXRPC_CALL_COMPLETE) { From patchwork Tue Dec 6 16:00:08 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 30401 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp2905447wrr; Tue, 6 Dec 2022 08:04:27 -0800 (PST) X-Google-Smtp-Source: AA0mqf6gkp0++bdPrZEYm/yEJ7mquql2wRUif5P8jqUOwuwiUFC+XLQtrSCaR0D+iGL/DQeP6DAQ X-Received: by 2002:a17:906:5649:b0:7ad:a2ef:c62 with SMTP id v9-20020a170906564900b007ada2ef0c62mr12121396ejr.126.1670342667597; Tue, 06 Dec 2022 08:04:27 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1670342667; cv=none; d=google.com; s=arc-20160816; b=MuUYatpi39qphlDY2vk6pqDBJCP0OGTI8qWaX/JGfXLnNxVGCipMY5ISnyR303KCLt 0HuQ2cirpyZy/B2jEYzBfGi3c0Pn71/6nNuZudM5cj1HMbC1JJn9pfUihcDUrjWr0jy6 0nq6OA2mSBhLSSDzqMGwv4ljWzauzYuEHiuLXiPLimsGVypfMNXCMZTs2oU5FYqnrHXY RWewNAYb4805fQFUiRBe1mHCzNNMc1pEt/eKsejTspClllD/bnWdiJMvXEUCCxi5xqLH 7qy15osc+aysrQi7qs8K5Z0UGmW34Ec5B/1ef2Ii+G2yRoprML2Z2i1F1K+XwTpFxNPJ BqsA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:cc:to:from :subject:organization:dkim-signature; bh=hZpNsLAFjJxrJFi5y660HConZiCV92c0NQHWEiStf24=; b=uF1ZSOygPvI6Sk3qOQEMdYRmBplbs7eQ6F6/pTagWxScEd5qNoNE/cGL+n86JCy3Qq MIulv8G5bOLsA9CQb2YzSUh3w9fTjNmTPNP5wt98o6RpORoDPkGGkbXHvA1IiJOngEHa ks7XD92HFUi5W7R8b/9brqa38Xo8qWBuiuqhobFrp/lBHhnp60YuU8kmwdWp4AGTlbWG mB82jPLhAfpEgZmV7FAo6UMMn+P1ubZSeCIRInVXcjLTiTAbx5w4FG2wEn6dNzHxIlIR it+yiHryxXglrZ8MNW75I7o7nzzH8NLPoZ2BXsnOhA38TRor7fp0x144+xWpxOI3ayuO qkEg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=aBLnGzFG; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: from out1.vger.email (out1.vger.email. 
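The RTT-ping change above boils down to a three-way decision when an ACK is due on a receive-heavy call: send a PING ACK while the peer has fewer than three RTT samples, send a PING ACK if the last RTT probe is more than a second old, otherwise fall back to a plain IDLE ACK. An illustrative helper capturing that decision (not rxrpc code; times are plain milliseconds here rather than ktime values):

#include <stdio.h>

enum ack_type { ACK_IDLE, ACK_PING };

static enum ack_type choose_ack(unsigned int rtt_count,
				long long rtt_last_req_ms, long long now_ms)
{
	if (rtt_count < 3)
		return ACK_PING;		/* still collecting RTT samples */
	if (rtt_last_req_ms + 1000 < now_ms)
		return ACK_PING;		/* last probe is over a second old */
	return ACK_IDLE;			/* a plain IDLE ACK will do */
}

int main(void)
{
	printf("%d %d %d\n",
	       choose_ack(1, 0, 0),		/* 1: PING, too few samples */
	       choose_ack(5, 0, 2000),		/* 1: PING, last probe stale */
	       choose_ack(5, 1500, 2000));	/* 0: IDLE, recent probe exists */
	return 0;
}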
3798903 Subject: [PATCH net-next 11/32] rxrpc: De-atomic call->ackr_window and call->ackr_nr_unacked From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Tue, 06 Dec 2022 16:00:08 +0000 Message-ID: <167034240849.1105287.18042839739468519619.stgit@warthog.procyon.org.uk> In-Reply-To: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> References: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.3 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1751481233153307651?= X-GMAIL-MSGID: =?utf-8?q?1751481233153307651?= call->ackr_window doesn't need to be atomic as ACK generation and ACK transmission are now done in the same thread, so drop the atomic64 handling and split it into two separate members. Similarly, call->ackr_nr_unacked doesn't need to be atomic now either. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- include/trace/events/rxrpc.h | 10 ++++++---- net/rxrpc/ar-internal.h | 5 +++-- net/rxrpc/call_event.c | 2 +- net/rxrpc/call_object.c | 3 ++- net/rxrpc/input.c | 14 +++++++------- net/rxrpc/output.c | 13 +++++-------- net/rxrpc/proc.c | 4 +--- net/rxrpc/recvmsg.c | 4 ++-- 8 files changed, 27 insertions(+), 28 deletions(-) diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index c47954aea7be..0b69a10bde38 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -1044,7 +1044,8 @@ TRACE_EVENT(rxrpc_receive, __field(enum rxrpc_receive_trace, why ) __field(rxrpc_serial_t, serial ) __field(rxrpc_seq_t, seq ) - __field(u64, window ) + __field(rxrpc_seq_t, window ) + __field(rxrpc_seq_t, wtop ) ), TP_fast_assign( @@ -1052,7 +1053,8 @@ TRACE_EVENT(rxrpc_receive, __entry->why = why; __entry->serial = serial; __entry->seq = seq; - __entry->window = atomic64_read(&call->ackr_window); + __entry->window = call->ackr_window; + __entry->wtop = call->ackr_wtop; ), TP_printk("c=%08x %s r=%08x q=%08x w=%08x-%08x", @@ -1060,8 +1062,8 @@ TRACE_EVENT(rxrpc_receive, __print_symbolic(__entry->why, rxrpc_receive_traces), __entry->serial, __entry->seq, - lower_32_bits(__entry->window), - upper_32_bits(__entry->window)) + __entry->window, + __entry->wtop) ); TRACE_EVENT(rxrpc_recvmsg, diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index ddba8048b7cb..fbfee05f0558 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -682,8 +682,9 @@ struct rxrpc_call { /* Receive-phase ACK management (ACKs we send). 
*/ u8 ackr_reason; /* reason to ACK */ rxrpc_serial_t ackr_serial; /* serial of packet being ACK'd */ - atomic64_t ackr_window; /* Base (in LSW) and top (in MSW) of SACK window */ - atomic_t ackr_nr_unacked; /* Number of unacked packets */ + rxrpc_seq_t ackr_window; /* Base of SACK window */ + rxrpc_seq_t ackr_wtop; /* Base of SACK window */ + unsigned int ackr_nr_unacked; /* Number of unacked packets */ atomic_t ackr_nr_consumed; /* Number of packets needing hard ACK */ struct { #define RXRPC_SACK_SIZE 256 diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c index 768bc8a63038..60bec6feba4a 100644 --- a/net/rxrpc/call_event.c +++ b/net/rxrpc/call_event.c @@ -460,7 +460,7 @@ void rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) rxrpc_send_ACK(call, RXRPC_ACK_IDLE, 0, rxrpc_propose_ack_rx_idle); - if (atomic_read(&call->ackr_nr_unacked) > 2) { + if (call->ackr_nr_unacked > 2) { if (call->peer->rtt_count < 3) rxrpc_send_ACK(call, RXRPC_ACK_PING, 0, rxrpc_propose_ack_ping_for_rtt); diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c index a9c77be9107a..f31ba1c7d103 100644 --- a/net/rxrpc/call_object.c +++ b/net/rxrpc/call_object.c @@ -168,7 +168,8 @@ struct rxrpc_call *rxrpc_alloc_call(struct rxrpc_sock *rx, gfp_t gfp, call->tx_total_len = -1; call->next_rx_timo = 20 * HZ; call->next_req_timo = 1 * HZ; - atomic64_set(&call->ackr_window, 0x100000001ULL); + call->ackr_window = 1; + call->ackr_wtop = 1; memset(&call->sock_node, 0xed, sizeof(call->sock_node)); diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index a72fd2f78fc0..c7a9c5c931de 100644 --- a/net/rxrpc/input.c +++ b/net/rxrpc/input.c @@ -321,7 +321,8 @@ static bool rxrpc_receiving_reply(struct rxrpc_call *call) static void rxrpc_input_update_ack_window(struct rxrpc_call *call, rxrpc_seq_t window, rxrpc_seq_t wtop) { - atomic64_set_release(&call->ackr_window, ((u64)wtop) << 32 | window); + call->ackr_window = window; + call->ackr_wtop = wtop; } /* @@ -349,9 +350,8 @@ static void rxrpc_input_data_one(struct rxrpc_call *call, struct sk_buff *skb, struct rxrpc_skb_priv *sp = rxrpc_skb(skb); struct sk_buff *oos; rxrpc_serial_t serial = sp->hdr.serial; - u64 win = atomic64_read(&call->ackr_window); - rxrpc_seq_t window = lower_32_bits(win); - rxrpc_seq_t wtop = upper_32_bits(win); + rxrpc_seq_t window = call->ackr_window; + rxrpc_seq_t wtop = call->ackr_wtop; rxrpc_seq_t wlimit = window + call->rx_winsize - 1; rxrpc_seq_t seq = sp->hdr.seq; bool last = sp->hdr.flags & RXRPC_LAST_PACKET; @@ -404,7 +404,7 @@ static void rxrpc_input_data_one(struct rxrpc_call *call, struct sk_buff *skb, else if (!skb_queue_empty(&call->rx_oos_queue)) ack_reason = RXRPC_ACK_DELAY; else - atomic_inc_return(&call->ackr_nr_unacked); + call->ackr_nr_unacked++; window++; if (after(window, wtop)) @@ -553,8 +553,8 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb) rxrpc_serial_t serial = sp->hdr.serial; rxrpc_seq_t seq0 = sp->hdr.seq; - _enter("{%llx,%x},{%u,%x}", - atomic64_read(&call->ackr_window), call->rx_highest_seq, + _enter("{%x,%x,%x},{%u,%x}", + call->ackr_window, call->ackr_wtop, call->rx_highest_seq, skb->len, seq0); state = READ_ONCE(call->state); diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index 2a44958b1bc7..3e132d9371ee 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -86,20 +86,18 @@ static size_t rxrpc_fill_out_ack(struct rxrpc_connection *conn, unsigned int qsize; rxrpc_seq_t window, wtop, wrap_point, ix, first; int rsize; - u64 wtmp; u32 mtu, jmax; u8 *ackp = 
txb->acks; u8 sack_buffer[sizeof(call->ackr_sack_table)] __aligned(8); - atomic_set(&call->ackr_nr_unacked, 0); + call->ackr_nr_unacked = 0; atomic_set(&call->ackr_nr_consumed, 0); rxrpc_inc_stat(call->rxnet, stat_tx_ack_fill); /* Barrier against rxrpc_input_data(). */ retry: - wtmp = atomic64_read_acquire(&call->ackr_window); - window = lower_32_bits(wtmp); - wtop = upper_32_bits(wtmp); + window = call->ackr_window; + wtop = call->ackr_wtop; txb->ack.firstPacket = htonl(window); txb->ack.nAcks = 0; @@ -111,9 +109,8 @@ static size_t rxrpc_fill_out_ack(struct rxrpc_connection *conn, */ memcpy(sack_buffer, call->ackr_sack_table, sizeof(sack_buffer)); wrap_point = window + RXRPC_SACK_SIZE - 1; - wtmp = atomic64_read_acquire(&call->ackr_window); - window = lower_32_bits(wtmp); - wtop = upper_32_bits(wtmp); + window = call->ackr_window; + wtop = call->ackr_wtop; if (after(wtop, wrap_point)) { cond_resched(); goto retry; diff --git a/net/rxrpc/proc.c b/net/rxrpc/proc.c index 3a59591ec061..38ea7df3fc42 100644 --- a/net/rxrpc/proc.c +++ b/net/rxrpc/proc.c @@ -54,7 +54,6 @@ static int rxrpc_call_seq_show(struct seq_file *seq, void *v) unsigned long timeout = 0; rxrpc_seq_t acks_hard_ack; char lbuff[50], rbuff[50]; - u64 wtmp; if (v == &rxnet->calls) { seq_puts(seq, @@ -81,7 +80,6 @@ static int rxrpc_call_seq_show(struct seq_file *seq, void *v) } acks_hard_ack = READ_ONCE(call->acks_hard_ack); - wtmp = atomic64_read_acquire(&call->ackr_window); seq_printf(seq, "UDP %-47.47s %-47.47s %4x %08x %08x %s %3u" " %-8.8s %08x %08x %08x %02x %08x %02x %08x %02x %06lx\n", @@ -96,7 +94,7 @@ static int rxrpc_call_seq_show(struct seq_file *seq, void *v) call->abort_code, call->debug_id, acks_hard_ack, READ_ONCE(call->tx_top) - acks_hard_ack, - lower_32_bits(wtmp), upper_32_bits(wtmp) - lower_32_bits(wtmp), + call->ackr_window, call->ackr_wtop - call->ackr_window, call->rx_serial, call->cong_cwnd, timeout); diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c index 88298ab8a9d7..56a31aebea38 100644 --- a/net/rxrpc/recvmsg.c +++ b/net/rxrpc/recvmsg.c @@ -177,7 +177,7 @@ static int rxrpc_recvmsg_term(struct rxrpc_call *call, struct msghdr *msg) } trace_rxrpc_recvdata(call, rxrpc_recvmsg_terminal, - lower_32_bits(atomic64_read(&call->ackr_window)) - 1, + call->ackr_window - 1, call->rx_pkt_offset, call->rx_pkt_len, ret); return ret; } @@ -293,7 +293,7 @@ static int rxrpc_recvmsg_data(struct socket *sock, struct rxrpc_call *call, rx_pkt_len = call->rx_pkt_len; if (call->state >= RXRPC_CALL_SERVER_ACK_REQUEST) { - seq = lower_32_bits(atomic64_read(&call->ackr_window)) - 1; + seq = call->ackr_window - 1; ret = 1; goto done; } From patchwork Tue Dec 6 16:00:17 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 30403 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp2905634wrr; Tue, 6 Dec 2022 08:04:41 -0800 (PST) X-Google-Smtp-Source: AA0mqf4TPePN3zjgkzYT685w/438qQiezYWN3p4FHrDgO9+FLPfb5SC5dtMkLeDv8EgjUWfU3ybC X-Received: by 2002:a17:906:29ca:b0:7c0:d0da:d9c8 with SMTP id y10-20020a17090629ca00b007c0d0dad9c8mr13559139eje.712.1670342681445; Tue, 06 Dec 2022 08:04:41 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1670342681; cv=none; d=google.com; s=arc-20160816; b=F/WZPjowrcJKJUrLDFNqGLdfwZsIMkVnZhhp1n91hyvWEyQyLnvOglDPaFiA9nAM2o xbPHu1NeePEM+P1q57MRT25eu8EV+fiPXZNBtELHfsTDHqW6PmilpKws00zbUoCMI+Dg z+gDcjQHEB5U//hQWDWgPayqZIjcEmWAqEODOCoT3YmhI0SkxLK+CWx/98wvnE0QECsE 
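The ackr_window change above replaces a single atomic 64-bit value, with the SACK window base in the low 32 bits and the top in the high 32 bits, by two plain 32-bit sequence fields, which is safe now that only the I/O thread updates and transmits them. A small before/after model in plain C, with uint32_t/uint64_t standing in for rxrpc_seq_t and the atomic64:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
	uint32_t window = 17, wtop = 42;

	/* old form: one 64-bit value, base in the low word, top in the high word
	 * (cf. the 0x100000001ULL initialiser that becomes two assignments above) */
	uint64_t packed = ((uint64_t)wtop << 32) | window;
	printf("packed: base=%" PRIu32 " top=%" PRIu32 "\n",
	       (uint32_t)(packed & 0xffffffffu), (uint32_t)(packed >> 32));

	/* new form: two independent fields, no atomic64 read/modify needed */
	struct { uint32_t ackr_window, ackr_wtop; } call = { window, wtop };
	printf("split:  base=%" PRIu32 " top=%" PRIu32 "\n",
	       call.ackr_window, call.ackr_wtop);
	return 0;
}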
From patchwork Tue Dec 6 16:00:17 2022
Subject: [PATCH net-next 12/32] rxrpc: Simplify ACK handling
From: David Howells
To: netdev@vger.kernel.org
Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org
Date: Tue, 06 Dec 2022 16:00:17 +0000
Message-ID: <167034241714.1105287.4231275547126287090.stgit@warthog.procyon.org.uk>

Now that general ACK transmission is done from the same thread as incoming DATA packet wrangling, there's no possibility that the SACK table will be updated by the latter whilst the former is trying to copy it to an ACK. This means that we can safely rotate the SACK table whilst updating it without having to take a lock, rather than keeping all the bits inside it in a fixed place and copying and then rotating it in the transmitter. Therefore, simplify SACK handling by keeping track of the starting point in the ring and rotating slots down as we consume them.

Signed-off-by: David Howells
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
---
 include/trace/events/rxrpc.h | 36 +++++++++++++++++++++++++++++++++
 net/rxrpc/ar-internal.h | 1 +
 net/rxrpc/input.c | 46 +++++++++++++++++++++---------------------
 net/rxrpc/output.c | 46 ++++++++++++------------------------------
 4 files changed, 73 insertions(+), 56 deletions(-)

diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index 0b69a10bde38..61b4dbe9f4a8 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -16,6 +16,13 @@ /* * Declare tracing information enums and their string mappings for display.
*/ +#define rxrpc_sack_traces \ + EM(rxrpc_sack_advance, "ADV") \ + EM(rxrpc_sack_fill, "FIL") \ + EM(rxrpc_sack_nack, "NAK") \ + EM(rxrpc_sack_none, "---") \ + E_(rxrpc_sack_oos, "OOS") + #define rxrpc_call_poke_traces \ EM(rxrpc_call_poke_abort, "Abort") \ EM(rxrpc_call_poke_complete, "Compl") \ @@ -394,6 +401,7 @@ enum rxrpc_recvmsg_trace { rxrpc_recvmsg_traces } __mode(byte); enum rxrpc_req_ack_trace { rxrpc_req_ack_traces } __mode(byte); enum rxrpc_rtt_rx_trace { rxrpc_rtt_rx_traces } __mode(byte); enum rxrpc_rtt_tx_trace { rxrpc_rtt_tx_traces } __mode(byte); +enum rxrpc_sack_trace { rxrpc_sack_traces } __mode(byte); enum rxrpc_skb_trace { rxrpc_skb_traces } __mode(byte); enum rxrpc_timer_trace { rxrpc_timer_traces } __mode(byte); enum rxrpc_tx_point { rxrpc_tx_points } __mode(byte); @@ -424,6 +432,7 @@ rxrpc_recvmsg_traces; rxrpc_req_ack_traces; rxrpc_rtt_rx_traces; rxrpc_rtt_tx_traces; +rxrpc_sack_traces; rxrpc_skb_traces; rxrpc_timer_traces; rxrpc_tx_points; @@ -1845,6 +1854,33 @@ TRACE_EVENT(rxrpc_call_poked, __entry->call_debug_id) ); +TRACE_EVENT(rxrpc_sack, + TP_PROTO(struct rxrpc_call *call, rxrpc_seq_t seq, + unsigned int sack, enum rxrpc_sack_trace what), + + TP_ARGS(call, seq, sack, what), + + TP_STRUCT__entry( + __field(unsigned int, call_debug_id ) + __field(rxrpc_seq_t, seq ) + __field(unsigned int, sack ) + __field(enum rxrpc_sack_trace, what ) + ), + + TP_fast_assign( + __entry->call_debug_id = call->debug_id; + __entry->seq = seq; + __entry->sack = sack; + __entry->what = what; + ), + + TP_printk("c=%08x q=%08x %s k=%x", + __entry->call_debug_id, + __entry->seq, + __print_symbolic(__entry->what, rxrpc_sack_traces), + __entry->sack) + ); + #undef EM #undef E_ #endif /* _TRACE_RXRPC_H */ diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index fbfee05f0558..092413e2b12a 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -681,6 +681,7 @@ struct rxrpc_call { /* Receive-phase ACK management (ACKs we send). */ u8 ackr_reason; /* reason to ACK */ + u16 ackr_sack_base; /* Starting slot in SACK table ring */ rxrpc_serial_t ackr_serial; /* serial of packet being ACK'd */ rxrpc_seq_t ackr_window; /* Base of SACK window */ rxrpc_seq_t ackr_wtop; /* Base of SACK window */ diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index c7a9c5c931de..423b1839c06d 100644 --- a/net/rxrpc/input.c +++ b/net/rxrpc/input.c @@ -350,6 +350,7 @@ static void rxrpc_input_data_one(struct rxrpc_call *call, struct sk_buff *skb, struct rxrpc_skb_priv *sp = rxrpc_skb(skb); struct sk_buff *oos; rxrpc_serial_t serial = sp->hdr.serial; + unsigned int sack = call->ackr_sack_base; rxrpc_seq_t window = call->ackr_window; rxrpc_seq_t wtop = call->ackr_wtop; rxrpc_seq_t wlimit = window + call->rx_winsize - 1; @@ -395,9 +396,6 @@ static void rxrpc_input_data_one(struct rxrpc_call *call, struct sk_buff *skb, /* Queue the packet. 
*/ if (seq == window) { - rxrpc_seq_t reset_from; - bool reset_sack = false; - if (sp->hdr.flags & RXRPC_REQUEST_ACK) ack_reason = RXRPC_ACK_REQUESTED; /* Send an immediate ACK if we fill in a hole */ @@ -407,8 +405,14 @@ static void rxrpc_input_data_one(struct rxrpc_call *call, struct sk_buff *skb, call->ackr_nr_unacked++; window++; - if (after(window, wtop)) + if (after(window, wtop)) { + trace_rxrpc_sack(call, seq, sack, rxrpc_sack_none); wtop = window; + } else { + trace_rxrpc_sack(call, seq, sack, rxrpc_sack_advance); + sack = (sack + 1) % RXRPC_SACK_SIZE; + } + rxrpc_get_skb(skb, rxrpc_skb_get_to_recvmsg); @@ -425,43 +429,39 @@ static void rxrpc_input_data_one(struct rxrpc_call *call, struct sk_buff *skb, __skb_unlink(oos, &call->rx_oos_queue); last = osp->hdr.flags & RXRPC_LAST_PACKET; seq = osp->hdr.seq; - if (!reset_sack) { - reset_from = seq; - reset_sack = true; - } + call->ackr_sack_table[sack] = 0; + trace_rxrpc_sack(call, seq, sack, rxrpc_sack_fill); + sack = (sack + 1) % RXRPC_SACK_SIZE; window++; rxrpc_input_queue_data(call, oos, window, wtop, - rxrpc_receive_queue_oos); + rxrpc_receive_queue_oos); } spin_unlock(&call->recvmsg_queue.lock); - if (reset_sack) { - do { - call->ackr_sack_table[reset_from % RXRPC_SACK_SIZE] = 0; - } while (reset_from++, before(reset_from, window)); - } + call->ackr_sack_base = sack; } else { - bool keep = false; + unsigned int slot; ack_reason = RXRPC_ACK_OUT_OF_SEQUENCE; - if (!call->ackr_sack_table[seq % RXRPC_SACK_SIZE]) { - call->ackr_sack_table[seq % RXRPC_SACK_SIZE] = 1; - keep = 1; + slot = seq - window; + sack = (sack + slot) % RXRPC_SACK_SIZE; + + if (call->ackr_sack_table[sack % RXRPC_SACK_SIZE]) { + ack_reason = RXRPC_ACK_DUPLICATE; + goto send_ack; } + call->ackr_sack_table[sack % RXRPC_SACK_SIZE] |= 1; + trace_rxrpc_sack(call, seq, sack, rxrpc_sack_oos); + if (after(seq + 1, wtop)) { wtop = seq + 1; rxrpc_input_update_ack_window(call, window, wtop); } - if (!keep) { - ack_reason = RXRPC_ACK_DUPLICATE; - goto send_ack; - } - skb_queue_walk(&call->rx_oos_queue, oos) { struct rxrpc_skb_priv *osp = rxrpc_skb(oos); diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index 3e132d9371ee..86dafa41236a 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -83,56 +83,36 @@ static size_t rxrpc_fill_out_ack(struct rxrpc_connection *conn, struct rxrpc_txbuf *txb) { struct rxrpc_ackinfo ackinfo; - unsigned int qsize; - rxrpc_seq_t window, wtop, wrap_point, ix, first; + unsigned int qsize, sack, wrap, to; + rxrpc_seq_t window, wtop; int rsize; u32 mtu, jmax; u8 *ackp = txb->acks; - u8 sack_buffer[sizeof(call->ackr_sack_table)] __aligned(8); call->ackr_nr_unacked = 0; atomic_set(&call->ackr_nr_consumed, 0); rxrpc_inc_stat(call->rxnet, stat_tx_ack_fill); + clear_bit(RXRPC_CALL_RX_IS_IDLE, &call->flags); - /* Barrier against rxrpc_input_data(). */ -retry: window = call->ackr_window; wtop = call->ackr_wtop; + sack = call->ackr_sack_base % RXRPC_SACK_SIZE; txb->ack.firstPacket = htonl(window); - txb->ack.nAcks = 0; + txb->ack.nAcks = wtop - window; if (after(wtop, window)) { - /* Try to copy the SACK ring locklessly. We can use the copy, - * only if the now-current top of the window didn't go past the - * previously read base - otherwise we can't know whether we - * have old data or new data. 
- */ - memcpy(sack_buffer, call->ackr_sack_table, sizeof(sack_buffer)); - wrap_point = window + RXRPC_SACK_SIZE - 1; - window = call->ackr_window; - wtop = call->ackr_wtop; - if (after(wtop, wrap_point)) { - cond_resched(); - goto retry; - } - - /* The buffer is maintained as a ring with an invariant mapping - * between bit position and sequence number, so we'll probably - * need to rotate it. - */ - txb->ack.nAcks = wtop - window; - ix = window % RXRPC_SACK_SIZE; - first = sizeof(sack_buffer) - ix; + wrap = RXRPC_SACK_SIZE - sack; + to = min_t(unsigned int, txb->ack.nAcks, RXRPC_SACK_SIZE); - if (ix + txb->ack.nAcks <= RXRPC_SACK_SIZE) { - memcpy(txb->acks, sack_buffer + ix, txb->ack.nAcks); + if (sack + txb->ack.nAcks <= RXRPC_SACK_SIZE) { + memcpy(txb->acks, call->ackr_sack_table + sack, txb->ack.nAcks); } else { - memcpy(txb->acks, sack_buffer + ix, first); - memcpy(txb->acks + first, sack_buffer, - txb->ack.nAcks - first); + memcpy(txb->acks, call->ackr_sack_table + sack, wrap); + memcpy(txb->acks + wrap, call->ackr_sack_table, + to - wrap); } - ackp += txb->ack.nAcks; + ackp += to; } else if (before(wtop, window)) { pr_warn("ack window backward %x %x", window, wtop); } else if (txb->ack.reason == RXRPC_ACK_DELAY) {
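To make the rotation described in the commit message concrete, here is a small, self-contained sketch of the bookkeeping, where the base index plays the role of call->ackr_sack_base. The struct and function names are illustrative only; the real table lives in struct rxrpc_call and is driven from rxrpc_input_data_one():

#include <stdint.h>

#define SACK_SIZE 256

struct sack_ring {
	uint8_t      table[SACK_SIZE];	/* 1 = packet held, 0 = hole */
	unsigned int base;		/* slot corresponding to the window base */
};

/* In-order packet consumed: the window base moves up one, so the slot that
 * used to represent the base is cleared and recycled as the new top of the
 * ring.  (rxrpc only rotates the base when SACK slots are actually in use,
 * i.e. window < wtop; this sketch always rotates for simplicity.)
 */
static void sack_advance(struct sack_ring *r)
{
	r->table[r->base] = 0;
	r->base = (r->base + 1) % SACK_SIZE;
}

/* Out-of-order packet "offset" packets above the window base: mark its slot
 * relative to the current base rather than at a fixed position.
 */
static void sack_mark_oos(struct sack_ring *r, unsigned int offset)
{
	r->table[(r->base + offset) % SACK_SIZE] = 1;
}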
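On the transmit side, the snapshot-and-retry dance removed above is no longer needed: the ACK's SACK area can be filled straight from the ring, copying at most two runs when the live region wraps past the end of the table. A hedged sketch of that copy follows; copy_sacks() is an illustrative name, not a kernel function:

#include <stdint.h>
#include <string.h>

#define SACK_SIZE 256

static void copy_sacks(uint8_t *out, const uint8_t table[SACK_SIZE],
		       unsigned int base, unsigned int n_acks)
{
	unsigned int run = SACK_SIZE - base;	/* slots before the wrap point */

	if (base + n_acks <= SACK_SIZE) {
		memcpy(out, table + base, n_acks);		/* contiguous run */
	} else {
		memcpy(out, table + base, run);			/* tail of the table */
		memcpy(out + run, table, n_acks - run);		/* then the front */
	}
}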
From patchwork Tue Dec 6 16:00:25 2022
Subject: [PATCH net-next 13/32] rxrpc: Don't lock call->tx_lock to access call->tx_buffer
From: David Howells
To: netdev@vger.kernel.org
Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org
Date: Tue, 06 Dec 2022 16:00:25 +0000
Message-ID: <167034242563.1105287.10561421349650229204.stgit@warthog.procyon.org.uk>

call->tx_buffer is now only accessed within the I/O thread (->tx_sendmsg is the way sendmsg passes packets to the I/O thread) so there's no need to lock around it.

Signed-off-by: David Howells
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
---
 net/rxrpc/txbuf.c | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/net/rxrpc/txbuf.c b/net/rxrpc/txbuf.c index d2cf2aac3adb..d43be8512386 100644 --- a/net/rxrpc/txbuf.c +++ b/net/rxrpc/txbuf.c @@ -110,12 +110,8 @@ void rxrpc_shrink_call_tx_buffer(struct rxrpc_call *call) _enter("%x/%x/%x", call->tx_bottom, call->acks_hard_ack, call->tx_top); - for (;;) { - spin_lock(&call->tx_lock); - txb = list_first_entry_or_null(&call->tx_buffer, - struct rxrpc_txbuf, call_link); - if (!txb) - break; + while ((txb = list_first_entry_or_null(&call->tx_buffer, + struct rxrpc_txbuf, call_link))) { hard_ack = smp_load_acquire(&call->acks_hard_ack); if (before(hard_ack, txb->seq)) break; @@ -128,15 +124,11 @@ void rxrpc_shrink_call_tx_buffer(struct rxrpc_call *call) trace_rxrpc_txqueue(call, rxrpc_txqueue_dequeue); - spin_unlock(&call->tx_lock); - rxrpc_put_txbuf(txb, rxrpc_txbuf_put_rotated); if (after(call->acks_hard_ack, call->tx_bottom + 128)) wake = true; } - spin_unlock(&call->tx_lock); - if (wake) wake_up(&call->waitq); }
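The lock can go because the buffer now has a single owner. As a rough userspace analogue of that ownership model (illustrative names only, not rxrpc code): producers only ever touch a separate, locked handoff queue, and the owning thread drains that queue before working on its private list without any lock:

#include <pthread.h>
#include <stddef.h>

struct buf {
	struct buf *next;
};

static pthread_mutex_t handoff_lock = PTHREAD_MUTEX_INITIALIZER;
static struct buf *handoff_queue;	/* filled by sendmsg-like producers */
static struct buf *private_list;	/* owned exclusively by the I/O thread */

/* Producer side: the only shared state is the handoff queue. */
void producer_queue_buf(struct buf *b)
{
	pthread_mutex_lock(&handoff_lock);
	b->next = handoff_queue;
	handoff_queue = b;
	pthread_mutex_unlock(&handoff_lock);
}

/* Consumer (I/O thread) side: take the whole handoff queue in one go, then
 * manipulate the private list with no lock at all - nothing else can see it.
 */
void io_thread_collect(void)
{
	struct buf *b;

	pthread_mutex_lock(&handoff_lock);
	b = handoff_queue;
	handoff_queue = NULL;
	pthread_mutex_unlock(&handoff_lock);

	while (b) {
		struct buf *next = b->next;
		b->next = private_list;
		private_list = b;	/* single owner, no locking needed */
		b = next;
	}
}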
From patchwork Tue Dec 6 16:00:34 2022
Subject: [PATCH net-next 14/32] rxrpc: Remove local->defrag_sem
From: David Howells
To: netdev@vger.kernel.org
Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org
Date: Tue, 06 Dec 2022 16:00:34 +0000
Message-ID: <167034243403.1105287.2323199364671061536.stgit@warthog.procyon.org.uk>

We no longer need local->defrag_sem as all DATA packet transmission is now done from one thread, so remove it.

Signed-off-by: David Howells
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
---
 net/rxrpc/ar-internal.h | 1 -
 net/rxrpc/local_object.c | 1 -
 net/rxrpc/output.c | 7 -------
 3 files changed, 9 deletions(-)

diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 092413e2b12a..29a8803236e8 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -288,7 +288,6 @@ struct rxrpc_local { struct socket *socket; /* my UDP socket */ struct task_struct *io_thread; struct rxrpc_sock __rcu *service; /* Service(s) listening on this endpoint */ - struct rw_semaphore defrag_sem; /* control re-enablement of IP DF bit */ #ifdef CONFIG_AF_RXRPC_INJECT_RX_DELAY struct sk_buff_head rx_delay_queue; /* Delay injection queue */ #endif
diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c index b35628ca57a5..b77e3076f6d9 100644 --- a/net/rxrpc/local_object.c +++ b/net/rxrpc/local_object.c @@ -96,7 +96,6 @@ static struct rxrpc_local *rxrpc_alloc_local(struct rxrpc_net *rxnet, atomic_set(&local->active_users, 1); local->rxnet = rxnet; INIT_HLIST_NODE(&local->link); - init_rwsem(&local->defrag_sem); #ifdef CONFIG_AF_RXRPC_INJECT_RX_DELAY skb_queue_head_init(&local->rx_delay_queue); #endif
diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index 86dafa41236a..4c2c4b9828b8 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -409,8 +409,6 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) if (txb->len >= call->peer->maxdata) goto send_fragmentable; - down_read(&conn->local->defrag_sem); - txb->last_sent = ktime_get_real(); if (txb->wire.flags & RXRPC_REQUEST_ACK) rtt_slot = rxrpc_begin_rtt_probe(call, serial, rxrpc_rtt_tx_data); @@ -425,7 +423,6 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) ret = do_udp_sendmsg(conn->local->socket, &msg, len); conn->peer->last_tx_at = ktime_get_seconds(); - up_read(&conn->local->defrag_sem); if (ret < 0) { rxrpc_inc_stat(call->rxnet, stat_tx_data_send_fail);
rxrpc_cancel_rtt_probe(call, serial, rtt_slot); @@ -486,8 +483,6 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) /* attempt to send this message with fragmentation enabled */ _debug("send fragment"); - down_write(&conn->local->defrag_sem); - txb->last_sent = ktime_get_real(); if (txb->wire.flags & RXRPC_REQUEST_ACK) rtt_slot = rxrpc_begin_rtt_probe(call, serial, rxrpc_rtt_tx_data); @@ -519,8 +514,6 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) rxrpc_tx_point_call_data_frag); } rxrpc_tx_backoff(call, ret); - - up_write(&conn->local->defrag_sem); goto done; }
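For context, the removed rw_semaphore implemented a classic pattern: ordinary transmissions held it shared while a fragmentable transmission held it exclusively around temporarily changing a socket-wide setting (per the field comment, re-enablement of the IP DF bit). A hedged userspace sketch of that pattern, which becomes unnecessary once a single thread does all the transmitting; the names here are illustrative, not kernel APIs:

#include <pthread.h>
#include <stdbool.h>

static pthread_rwlock_t defrag_lock = PTHREAD_RWLOCK_INITIALIZER;
static bool df_bit_set = true;		/* stands in for the socket-wide DF option */

void send_normal(void)
{
	pthread_rwlock_rdlock(&defrag_lock);	/* many normal senders in parallel */
	/* ... transmit with DF left set ... */
	pthread_rwlock_unlock(&defrag_lock);
}

void send_fragmentable(void)
{
	pthread_rwlock_wrlock(&defrag_lock);	/* exclude every other sender */
	df_bit_set = false;			/* allow fragmentation for this packet */
	/* ... transmit the oversized packet ... */
	df_bit_set = true;			/* restore the shared setting */
	pthread_rwlock_unlock(&defrag_lock);
}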
From patchwork Tue Dec 6 16:00:42 2022
Subject: [PATCH net-next 15/32] rxrpc: Implement a mechanism to send an event notification to a connection
From: David Howells
To: netdev@vger.kernel.org
Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org
Date: Tue, 06 Dec 2022 16:00:42 +0000
Message-ID: <167034244255.1105287.14066772950642129050.stgit@warthog.procyon.org.uk>

Provide a means by which an event notification can be sent to a connection such that the I/O thread can pick it up and handle it rather than doing it in a separate workqueue. This is then used to move the deferred final ACK of a call into the I/O thread rather than a separate workqueue, as part of the drive to do all transmission from the I/O thread.

Signed-off-by: David Howells
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
---
 include/trace/events/rxrpc.h | 5 ++---
 net/rxrpc/ar-internal.h | 5 +++++
 net/rxrpc/conn_event.c | 14 ++++++++++----
 net/rxrpc/conn_object.c | 20 +++++++++++++++++++-
 net/rxrpc/io_thread.c | 18 +++++++++++++++++-
 net/rxrpc/local_object.c | 1 +
 6 files changed, 54 insertions(+), 9 deletions(-)

diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index 61b4dbe9f4a8..5a0c44cf1ce6 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -118,7 +118,7 @@ EM(rxrpc_conn_get_call_input, "GET inp-call") \ EM(rxrpc_conn_get_conn_input, "GET inp-conn") \ EM(rxrpc_conn_get_idle, "GET idle ") \ - EM(rxrpc_conn_get_poke, "GET poke ") \ + EM(rxrpc_conn_get_poke_timer, "GET poke ") \ EM(rxrpc_conn_get_service_conn, "GET svc-conn") \ EM(rxrpc_conn_new_client, "NEW client ") \ EM(rxrpc_conn_new_service, "NEW service ") \ @@ -133,10 +133,9 @@ EM(rxrpc_conn_put_service_reaped, "PUT svc-reap") \ EM(rxrpc_conn_put_unbundle, "PUT unbundle") \ EM(rxrpc_conn_put_unidle, "PUT unidle ") \ + EM(rxrpc_conn_put_work, "PUT work ") \ EM(rxrpc_conn_queue_challenge, "QUE chall ") \ - EM(rxrpc_conn_queue_retry_work, "QUE retry-wk") \ EM(rxrpc_conn_queue_rx_work, "QUE rx-work ") \ - EM(rxrpc_conn_queue_timer, "QUE timer ") \ EM(rxrpc_conn_see_new_service_conn, "SEE new-svc ") \ EM(rxrpc_conn_see_reap_service, "SEE reap-svc") \ E_(rxrpc_conn_see_work, "SEE work ")
diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 29a8803236e8..70a3fdf62cd1 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -202,6 +202,7 @@ struct rxrpc_host_header { * - max 48 bytes (struct sk_buff::cb) */ struct rxrpc_skb_priv { + struct rxrpc_connection *conn; /* Connection referred to (poke packet) */ u16 offset; /* Offset of data */ u16 len; /* Length of data */ u8 flags; @@ -292,6 +293,7 @@ struct rxrpc_local { struct sk_buff_head rx_delay_queue; /*
Delay injection queue */ #endif struct sk_buff_head rx_queue; /* Received packets */ + struct list_head conn_attend_q; /* Conns requiring immediate attention */ struct list_head call_attend_q; /* Calls requiring immediate attention */ struct rb_root client_bundles; /* Client connection bundles by socket params */ spinlock_t client_bundles_lock; /* Lock for client_bundles */ @@ -441,6 +443,7 @@ struct rxrpc_connection { struct rxrpc_peer *peer; /* Remote endpoint */ struct rxrpc_net *rxnet; /* Network namespace to which call belongs */ struct key *key; /* Security details */ + struct list_head attend_link; /* Link in local->conn_attend_q */ refcount_t ref; atomic_t active; /* Active count for service conns */ @@ -907,6 +910,7 @@ void rxrpc_conn_retransmit_call(struct rxrpc_connection *, struct sk_buff *, void rxrpc_process_connection(struct work_struct *); void rxrpc_process_delayed_final_acks(struct rxrpc_connection *, bool); int rxrpc_input_conn_packet(struct rxrpc_connection *conn, struct sk_buff *skb); +void rxrpc_input_conn_event(struct rxrpc_connection *, struct sk_buff *); /* * conn_object.c @@ -914,6 +918,7 @@ int rxrpc_input_conn_packet(struct rxrpc_connection *conn, struct sk_buff *skb); extern unsigned int rxrpc_connection_expiry; extern unsigned int rxrpc_closed_conn_expiry; +void rxrpc_poke_conn(struct rxrpc_connection *, enum rxrpc_conn_trace); struct rxrpc_connection *rxrpc_alloc_connection(struct rxrpc_net *, gfp_t); struct rxrpc_connection *rxrpc_find_client_connection_rcu(struct rxrpc_local *, struct sockaddr_rxrpc *, diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c index f05d58636307..488634f36674 100644 --- a/net/rxrpc/conn_event.c +++ b/net/rxrpc/conn_event.c @@ -412,10 +412,6 @@ static void rxrpc_do_process_connection(struct rxrpc_connection *conn) if (test_and_clear_bit(RXRPC_CONN_EV_CHALLENGE, &conn->events)) rxrpc_secure_connection(conn); - /* Process delayed ACKs whose time has come. */ - if (conn->flags & RXRPC_CONN_FINAL_ACK_MASK) - rxrpc_process_delayed_final_acks(conn, false); - /* go through the conn-level event packets, releasing the ref on this * connection that each one has when we've finished with it */ while ((skb = skb_dequeue(&conn->rx_queue))) { @@ -515,3 +511,13 @@ int rxrpc_input_conn_packet(struct rxrpc_connection *conn, struct sk_buff *skb) return -EPROTO; } } + +/* + * Input a connection event. + */ +void rxrpc_input_conn_event(struct rxrpc_connection *conn, struct sk_buff *skb) +{ + /* Process delayed ACKs whose time has come. 
*/ + if (conn->flags & RXRPC_CONN_FINAL_ACK_MASK) + rxrpc_process_delayed_final_acks(conn, false); +} diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c index 3c8f83dacb2b..b3cb85e0ed70 100644 --- a/net/rxrpc/conn_object.c +++ b/net/rxrpc/conn_object.c @@ -23,12 +23,30 @@ static void rxrpc_clean_up_connection(struct work_struct *work); static void rxrpc_set_service_reap_timer(struct rxrpc_net *rxnet, unsigned long reap_at); +void rxrpc_poke_conn(struct rxrpc_connection *conn, enum rxrpc_conn_trace why) +{ + struct rxrpc_local *local = conn->local; + bool busy; + + if (WARN_ON_ONCE(!local)) + return; + + spin_lock_bh(&local->lock); + busy = !list_empty(&conn->attend_link); + if (!busy) { + rxrpc_get_connection(conn, why); + list_add_tail(&conn->attend_link, &local->conn_attend_q); + } + spin_unlock_bh(&local->lock); + rxrpc_wake_up_io_thread(local); +} + static void rxrpc_connection_timer(struct timer_list *timer) { struct rxrpc_connection *conn = container_of(timer, struct rxrpc_connection, timer); - rxrpc_queue_conn(conn, rxrpc_conn_queue_timer); + rxrpc_poke_conn(conn, rxrpc_conn_get_poke_timer); } /* diff --git a/net/rxrpc/io_thread.c b/net/rxrpc/io_thread.c index 163b3414563c..be23b2dc930f 100644 --- a/net/rxrpc/io_thread.c +++ b/net/rxrpc/io_thread.c @@ -428,6 +428,7 @@ static int rxrpc_input_packet_on_conn(struct rxrpc_connection *conn, */ int rxrpc_io_thread(void *data) { + struct rxrpc_connection *conn; struct sk_buff_head rx_queue; struct rxrpc_local *local = data; struct rxrpc_call *call; @@ -443,6 +444,19 @@ int rxrpc_io_thread(void *data) for (;;) { rxrpc_inc_stat(local->rxnet, stat_io_loop); + /* Deal with connections that want immediate attention. */ + if ((conn = list_first_entry_or_null(&local->conn_attend_q, + struct rxrpc_connection, + attend_link))) { + spin_lock_bh(&local->lock); + list_del_init(&conn->attend_link); + spin_unlock_bh(&local->lock); + + rxrpc_input_conn_event(conn, NULL); + rxrpc_put_connection(conn, rxrpc_conn_put_poke); + continue; + } + /* Deal with calls that want immediate attention. 
*/ if ((call = list_first_entry_or_null(&local->call_attend_q, struct rxrpc_call, @@ -470,6 +484,7 @@ int rxrpc_io_thread(void *data) rxrpc_input_error(local, skb); rxrpc_free_skb(skb, rxrpc_skb_put_error_report); break; + break; default: WARN_ON_ONCE(1); rxrpc_free_skb(skb, rxrpc_skb_put_unknown); @@ -498,7 +513,8 @@ int rxrpc_io_thread(void *data) set_current_state(TASK_INTERRUPTIBLE); if (!skb_queue_empty(&local->rx_queue) || - !list_empty(&local->call_attend_q)) { + !list_empty(&local->call_attend_q) || + !list_empty(&local->conn_attend_q)) { __set_current_state(TASK_RUNNING); continue; }
diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c index b77e3076f6d9..3ab68ae68954 100644 --- a/net/rxrpc/local_object.c +++ b/net/rxrpc/local_object.c @@ -100,6 +100,7 @@ static struct rxrpc_local *rxrpc_alloc_local(struct rxrpc_net *rxnet, skb_queue_head_init(&local->rx_delay_queue); #endif skb_queue_head_init(&local->rx_queue); + INIT_LIST_HEAD(&local->conn_attend_q); INIT_LIST_HEAD(&local->call_attend_q); local->client_bundles = RB_ROOT; spin_lock_init(&local->client_bundles_lock);
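The notification mechanism boils down to an attend queue plus a wake-up: a connection is queued at most once (taking a reference as it is queued) and the I/O thread pops and services entries. A simplified userspace sketch of the same pattern; names are illustrative, the kernel version uses list_head, local->lock and rxrpc_wake_up_io_thread(), and queues at the tail rather than pushing at the head:

#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

struct conn {
	struct conn *next_attend;
	bool         queued;
};

static pthread_mutex_t attend_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  attend_wake = PTHREAD_COND_INITIALIZER;
static struct conn    *attend_q;

/* Poke side: cheap if the connection is already awaiting attention. */
void poke_conn(struct conn *c)
{
	pthread_mutex_lock(&attend_lock);
	if (!c->queued) {			/* not yet pending */
		c->queued = true;		/* the kernel takes a ref here */
		c->next_attend = attend_q;
		attend_q = c;
	}
	pthread_mutex_unlock(&attend_lock);
	pthread_cond_signal(&attend_wake);	/* kick the I/O thread */
}

/* I/O thread side: wait for and dequeue the next connection needing work. */
struct conn *io_thread_next_conn(void)
{
	struct conn *c;

	pthread_mutex_lock(&attend_lock);
	while (!(c = attend_q))
		pthread_cond_wait(&attend_wake, &attend_lock);
	attend_q = c->next_attend;
	c->queued = false;
	pthread_mutex_unlock(&attend_lock);
	return c;	/* caller handles the event, then drops the ref */
}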
From patchwork Tue Dec 6 16:00:51 2022
Subject: [PATCH net-next 16/32] rxrpc: Clean up connection abort
From: David Howells
To: netdev@vger.kernel.org
Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org
Date: Tue, 06 Dec 2022 16:00:51 +0000
Message-ID: <167034245119.1105287.9100486645726374644.stgit@warthog.procyon.org.uk>

Clean up connection abort, using the connection state_lock to gate changes to that state, and use an rxrpc_call_completion value to indicate the difference between local and remote aborts, as these can be pasted directly into the call state.

Signed-off-by: David Howells
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
---
 include/trace/events/rxrpc.h | 2
 net/rxrpc/ar-internal.h | 39 ++++---
 net/rxrpc/call_object.c | 8 -
 net/rxrpc/conn_event.c | 233 +++++++++++++++--------------------------
 net/rxrpc/insecure.c | 18 +--
 net/rxrpc/output.c | 56 ++++++++++
 net/rxrpc/proc.c | 10 +-
 net/rxrpc/rxkad.c | 28 ++---
 8 files changed, 181 insertions(+), 213 deletions(-)

diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index 5a0c44cf1ce6..d9c754f8ef2b 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -118,6 +118,7 @@ EM(rxrpc_conn_get_call_input, "GET inp-call") \ EM(rxrpc_conn_get_conn_input, "GET inp-conn") \ EM(rxrpc_conn_get_idle, "GET idle ") \ + EM(rxrpc_conn_get_poke_abort, "GET pk-abort") \ EM(rxrpc_conn_get_poke_timer, "GET poke ") \ EM(rxrpc_conn_get_service_conn, "GET svc-conn") \ EM(rxrpc_conn_new_client, "NEW client ") \ @@ -135,6 +136,7 @@ EM(rxrpc_conn_put_unidle, "PUT unidle ") \ EM(rxrpc_conn_put_work, "PUT work ") \ EM(rxrpc_conn_queue_challenge, "QUE chall ") \ + EM(rxrpc_conn_queue_retry_work, "QUE retry-wk") \ EM(rxrpc_conn_queue_rx_work, "QUE rx-work ") \ EM(rxrpc_conn_see_new_service_conn, "SEE new-svc ") \ EM(rxrpc_conn_see_reap_service, "SEE reap-svc") \
diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 70a3fdf62cd1..ab031f899c2d 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -263,13 +263,11 @@ struct rxrpc_security { /* respond to a challenge */ int (*respond_to_challenge)(struct rxrpc_connection *, - struct sk_buff *, - u32 *); + struct sk_buff *); /* verify a response */ int (*verify_response)(struct rxrpc_connection *, - struct sk_buff *, - u32 *); + struct sk_buff *); /* clear connection security */ void (*clear)(struct rxrpc_connection *); @@ -367,6 +365,18 @@ struct rxrpc_conn_parameters { u32 security_level; /* Security level selected */ }; +/* + * Call completion condition (state == RXRPC_CALL_COMPLETE).
+ */ +enum rxrpc_call_completion { + RXRPC_CALL_SUCCEEDED, /* - Normal termination */ + RXRPC_CALL_REMOTELY_ABORTED, /* - call aborted by peer */ + RXRPC_CALL_LOCALLY_ABORTED, /* - call aborted locally on error or close */ + RXRPC_CALL_LOCAL_ERROR, /* - call failed due to local error */ + RXRPC_CALL_NETWORK_ERROR, /* - call terminated by network error */ + NR__RXRPC_CALL_COMPLETIONS +}; + /* * Bits in the connection flags. */ @@ -391,6 +401,7 @@ enum rxrpc_conn_flag { */ enum rxrpc_conn_event { RXRPC_CONN_EV_CHALLENGE, /* Send challenge packet */ + RXRPC_CONN_EV_ABORT_CALLS, /* Abort attached calls */ }; /* @@ -403,8 +414,7 @@ enum rxrpc_conn_proto_state { RXRPC_CONN_SERVICE_UNSECURED, /* Service unsecured connection */ RXRPC_CONN_SERVICE_CHALLENGING, /* Service challenging for security */ RXRPC_CONN_SERVICE, /* Service secured connection */ - RXRPC_CONN_REMOTELY_ABORTED, /* Conn aborted by peer */ - RXRPC_CONN_LOCALLY_ABORTED, /* Conn aborted locally */ + RXRPC_CONN_ABORTED, /* Conn aborted */ RXRPC_CONN__NR_STATES }; @@ -487,7 +497,8 @@ struct rxrpc_connection { unsigned long idle_timestamp; /* Time at which last became idle */ spinlock_t state_lock; /* state-change lock */ enum rxrpc_conn_proto_state state; /* current state of connection */ - u32 abort_code; /* Abort code of connection abort */ + enum rxrpc_call_completion completion; /* Completion condition */ + s32 abort_code; /* Abort code of connection abort */ int debug_id; /* debug ID for printks */ atomic_t serial; /* packet serial number counter */ unsigned int hi_serial; /* highest serial number received */ @@ -561,18 +572,6 @@ enum rxrpc_call_state { NR__RXRPC_CALL_STATES }; -/* - * Call completion condition (state == RXRPC_CALL_COMPLETE). - */ -enum rxrpc_call_completion { - RXRPC_CALL_SUCCEEDED, /* - Normal termination */ - RXRPC_CALL_REMOTELY_ABORTED, /* - call aborted by peer */ - RXRPC_CALL_LOCALLY_ABORTED, /* - call aborted locally on error or close */ - RXRPC_CALL_LOCAL_ERROR, /* - call failed due to local error */ - RXRPC_CALL_NETWORK_ERROR, /* - call terminated by network error */ - NR__RXRPC_CALL_COMPLETIONS -}; - /* * Call Tx congestion management modes. 
*/ @@ -907,6 +906,7 @@ void rxrpc_clean_up_local_conns(struct rxrpc_local *); */ void rxrpc_conn_retransmit_call(struct rxrpc_connection *, struct sk_buff *, unsigned int); +int rxrpc_abort_conn(struct rxrpc_connection *, struct sk_buff *, s32, int, const char *); void rxrpc_process_connection(struct work_struct *); void rxrpc_process_delayed_final_acks(struct rxrpc_connection *, bool); int rxrpc_input_conn_packet(struct rxrpc_connection *conn, struct sk_buff *skb); @@ -1064,6 +1064,7 @@ static inline struct rxrpc_net *rxrpc_net(struct net *net) int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb); int rxrpc_send_abort_packet(struct rxrpc_call *); int rxrpc_send_data_packet(struct rxrpc_call *, struct rxrpc_txbuf *); +void rxrpc_send_conn_abort(struct rxrpc_connection *); void rxrpc_reject_packet(struct rxrpc_local *local, struct sk_buff *skb); void rxrpc_send_keepalive(struct rxrpc_peer *); void rxrpc_transmit_one(struct rxrpc_call *call, struct rxrpc_txbuf *txb); diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c index f31ba1c7d103..924c8dc58ac9 100644 --- a/net/rxrpc/call_object.c +++ b/net/rxrpc/call_object.c @@ -442,14 +442,10 @@ void rxrpc_incoming_call(struct rxrpc_sock *rx, call->state = RXRPC_CALL_SERVER_RECV_REQUEST; break; - case RXRPC_CONN_REMOTELY_ABORTED: - __rxrpc_set_call_completion(call, RXRPC_CALL_REMOTELY_ABORTED, + case RXRPC_CONN_ABORTED: + __rxrpc_set_call_completion(call, conn->completion, conn->abort_code, conn->error); break; - case RXRPC_CONN_LOCALLY_ABORTED: - __rxrpc_abort_call("CON", call, 1, - conn->abort_code, conn->error); - break; default: BUG(); } diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c index 488634f36674..93a892471999 100644 --- a/net/rxrpc/conn_event.c +++ b/net/rxrpc/conn_event.c @@ -16,6 +16,60 @@ #include #include "ar-internal.h" +/* + * Set the completion state on an aborted connection. + */ +static bool rxrpc_set_conn_aborted(struct rxrpc_connection *conn, struct sk_buff *skb, + s32 abort_code, int err, + enum rxrpc_call_completion compl) +{ + bool aborted = false; + + if (conn->state != RXRPC_CONN_ABORTED) { + spin_lock(&conn->state_lock); + if (conn->state != RXRPC_CONN_ABORTED) { + conn->abort_code = abort_code; + conn->error = err; + conn->completion = compl; + /* Order the abort info before the state change. */ + smp_store_release(&conn->state, RXRPC_CONN_ABORTED); + set_bit(RXRPC_CONN_DONT_REUSE, &conn->flags); + set_bit(RXRPC_CONN_EV_ABORT_CALLS, &conn->events); + aborted = true; + } + spin_unlock(&conn->state_lock); + } + + return aborted; +} + +/* + * Mark a socket buffer to indicate that the connection it's on should be aborted. + */ +int rxrpc_abort_conn(struct rxrpc_connection *conn, struct sk_buff *skb, + s32 abort_code, int err, const char *why) +{ + struct rxrpc_skb_priv *sp = rxrpc_skb(skb); + + if (rxrpc_set_conn_aborted(conn, skb, abort_code, err, + RXRPC_CALL_LOCALLY_ABORTED)) { + trace_rxrpc_abort(0, why, sp->hdr.cid, sp->hdr.callNumber, + sp->hdr.seq, abort_code, err); + rxrpc_poke_conn(conn, rxrpc_conn_get_poke_abort); + } + return -EPROTO; +} + +/* + * Mark a connection as being remotely aborted. + */ +static bool rxrpc_input_conn_abort(struct rxrpc_connection *conn, + struct sk_buff *skb) +{ + return rxrpc_set_conn_aborted(conn, skb, skb->priority, -ECONNABORTED, + RXRPC_CALL_REMOTELY_ABORTED); +} + /* * Retransmit terminal ACK or ABORT of the previous call. 
*/ @@ -146,9 +200,7 @@ void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn, /* * pass a connection-level abort onto all calls on that connection */ -static void rxrpc_abort_calls(struct rxrpc_connection *conn, - enum rxrpc_call_completion compl, - rxrpc_serial_t serial) +static void rxrpc_abort_calls(struct rxrpc_connection *conn) { struct rxrpc_call *call; int i; @@ -161,102 +213,17 @@ static void rxrpc_abort_calls(struct rxrpc_connection *conn, call = rcu_dereference_protected( conn->channels[i].call, lockdep_is_held(&conn->bundle->channel_lock)); - if (call) { - if (compl == RXRPC_CALL_LOCALLY_ABORTED) - trace_rxrpc_abort(call->debug_id, - "CON", call->cid, - call->call_id, 0, - conn->abort_code, - conn->error); - else - trace_rxrpc_rx_abort(call, serial, - conn->abort_code); - rxrpc_set_call_completion(call, compl, + if (call) + rxrpc_set_call_completion(call, + conn->completion, conn->abort_code, conn->error); - } } spin_unlock(&conn->bundle->channel_lock); _leave(""); } -/* - * generate a connection-level abort - */ -static int rxrpc_abort_connection(struct rxrpc_connection *conn, - int error, u32 abort_code) -{ - struct rxrpc_wire_header whdr; - struct msghdr msg; - struct kvec iov[2]; - __be32 word; - size_t len; - u32 serial; - int ret; - - _enter("%d,,%u,%u", conn->debug_id, error, abort_code); - - /* generate a connection-level abort */ - spin_lock(&conn->state_lock); - if (conn->state >= RXRPC_CONN_REMOTELY_ABORTED) { - spin_unlock(&conn->state_lock); - _leave(" = 0 [already dead]"); - return 0; - } - - conn->error = error; - conn->abort_code = abort_code; - conn->state = RXRPC_CONN_LOCALLY_ABORTED; - set_bit(RXRPC_CONN_DONT_REUSE, &conn->flags); - spin_unlock(&conn->state_lock); - - msg.msg_name = &conn->peer->srx.transport; - msg.msg_namelen = conn->peer->srx.transport_len; - msg.msg_control = NULL; - msg.msg_controllen = 0; - msg.msg_flags = 0; - - whdr.epoch = htonl(conn->proto.epoch); - whdr.cid = htonl(conn->proto.cid); - whdr.callNumber = 0; - whdr.seq = 0; - whdr.type = RXRPC_PACKET_TYPE_ABORT; - whdr.flags = conn->out_clientflag; - whdr.userStatus = 0; - whdr.securityIndex = conn->security_ix; - whdr._rsvd = 0; - whdr.serviceId = htons(conn->service_id); - - word = htonl(conn->abort_code); - - iov[0].iov_base = &whdr; - iov[0].iov_len = sizeof(whdr); - iov[1].iov_base = &word; - iov[1].iov_len = sizeof(word); - - len = iov[0].iov_len + iov[1].iov_len; - - serial = atomic_inc_return(&conn->serial); - rxrpc_abort_calls(conn, RXRPC_CALL_LOCALLY_ABORTED, serial); - whdr.serial = htonl(serial); - - ret = kernel_sendmsg(conn->local->socket, &msg, iov, 2, len); - if (ret < 0) { - trace_rxrpc_tx_fail(conn->debug_id, serial, ret, - rxrpc_tx_point_conn_abort); - _debug("sendmsg failed: %d", ret); - return -EAGAIN; - } - - trace_rxrpc_tx_packet(conn->debug_id, &whdr, rxrpc_tx_point_conn_abort); - - conn->peer->last_tx_at = ktime_get_seconds(); - - _leave(" = 0"); - return 0; -} - /* * mark a call as being on a now-secured channel * - must be called with BH's disabled. 
@@ -278,26 +245,22 @@ static void rxrpc_call_is_secure(struct rxrpc_call *call) * connection-level Rx packet processor */ static int rxrpc_process_event(struct rxrpc_connection *conn, - struct sk_buff *skb, - u32 *_abort_code) + struct sk_buff *skb) { struct rxrpc_skb_priv *sp = rxrpc_skb(skb); int loop, ret; - if (conn->state >= RXRPC_CONN_REMOTELY_ABORTED) { - _leave(" = -ECONNABORTED [%u]", conn->state); + if (conn->state == RXRPC_CONN_ABORTED) return -ECONNABORTED; - } _enter("{%d},{%u,%%%u},", conn->debug_id, sp->hdr.type, sp->hdr.serial); switch (sp->hdr.type) { case RXRPC_PACKET_TYPE_CHALLENGE: - return conn->security->respond_to_challenge(conn, skb, - _abort_code); + return conn->security->respond_to_challenge(conn, skb); case RXRPC_PACKET_TYPE_RESPONSE: - ret = conn->security->verify_response(conn, skb, _abort_code); + ret = conn->security->verify_response(conn, skb); if (ret < 0) return ret; @@ -336,26 +299,8 @@ static int rxrpc_process_event(struct rxrpc_connection *conn, */ static void rxrpc_secure_connection(struct rxrpc_connection *conn) { - u32 abort_code; - int ret; - - _enter("{%d}", conn->debug_id); - - ASSERT(conn->security_ix != 0); - - if (conn->security->issue_challenge(conn) < 0) { - abort_code = RX_CALL_DEAD; - ret = -ENOMEM; - goto abort; - } - - _leave(""); - return; - -abort: - _debug("abort %d, %d", ret, abort_code); - rxrpc_abort_connection(conn, ret, abort_code); - _leave(" [aborted]"); + if (conn->security->issue_challenge(conn) < 0) + rxrpc_abort_conn(conn, NULL, RX_CALL_DEAD, -ENOMEM, "OOM"); } /* @@ -406,7 +351,6 @@ void rxrpc_process_delayed_final_acks(struct rxrpc_connection *conn, bool force) static void rxrpc_do_process_connection(struct rxrpc_connection *conn) { struct sk_buff *skb; - u32 abort_code = RX_PROTOCOL_ERROR; int ret; if (test_and_clear_bit(RXRPC_CONN_EV_CHALLENGE, &conn->events)) @@ -416,33 +360,18 @@ static void rxrpc_do_process_connection(struct rxrpc_connection *conn) * connection that each one has when we've finished with it */ while ((skb = skb_dequeue(&conn->rx_queue))) { rxrpc_see_skb(skb, rxrpc_skb_see_conn_work); - ret = rxrpc_process_event(conn, skb, &abort_code); + ret = rxrpc_process_event(conn, skb); switch (ret) { - case -EPROTO: - case -EKEYEXPIRED: - case -EKEYREJECTED: - goto protocol_error; case -ENOMEM: case -EAGAIN: - goto requeue_and_leave; - case -ECONNABORTED: + skb_queue_head(&conn->rx_queue, skb); + rxrpc_queue_conn(conn, rxrpc_conn_queue_retry_work); + break; default: rxrpc_free_skb(skb, rxrpc_skb_put_conn_work); break; } } - - return; - -requeue_and_leave: - skb_queue_head(&conn->rx_queue, skb); - return; - -protocol_error: - if (rxrpc_abort_connection(conn, ret, abort_code) < 0) - goto requeue_and_leave; - rxrpc_free_skb(skb, rxrpc_skb_put_conn_work); - return; } void rxrpc_process_connection(struct work_struct *work) @@ -480,28 +409,25 @@ int rxrpc_input_conn_packet(struct rxrpc_connection *conn, struct sk_buff *skb) { struct rxrpc_skb_priv *sp = rxrpc_skb(skb); - if (conn->state >= RXRPC_CONN_REMOTELY_ABORTED) { - _leave(" = -ECONNABORTED [%u]", conn->state); - return 0; - } - - _enter("{%d},{%u,%%%u},", conn->debug_id, sp->hdr.type, sp->hdr.serial); - switch (sp->hdr.type) { case RXRPC_PACKET_TYPE_BUSY: /* Just ignore BUSY packets for now. 
*/ return 0; case RXRPC_PACKET_TYPE_ABORT: - conn->error = -ECONNABORTED; - conn->abort_code = skb->priority; - conn->state = RXRPC_CONN_REMOTELY_ABORTED; - set_bit(RXRPC_CONN_DONT_REUSE, &conn->flags); - rxrpc_abort_calls(conn, RXRPC_CALL_REMOTELY_ABORTED, sp->hdr.serial); - return 0; + if (smp_load_acquire(&conn->state) == RXRPC_CONN_ABORTED) + return true; + rxrpc_input_conn_abort(conn, skb); + rxrpc_abort_calls(conn); + return true; case RXRPC_PACKET_TYPE_CHALLENGE: case RXRPC_PACKET_TYPE_RESPONSE: + if (smp_load_acquire(&conn->state) == RXRPC_CONN_ABORTED) { + if (conn->completion == RXRPC_CALL_LOCALLY_ABORTED) + rxrpc_send_conn_abort(conn); + return true; + } rxrpc_post_packet_to_conn(conn, skb); return 0; @@ -517,6 +443,9 @@ int rxrpc_input_conn_packet(struct rxrpc_connection *conn, struct sk_buff *skb) */ void rxrpc_input_conn_event(struct rxrpc_connection *conn, struct sk_buff *skb) { + if (test_and_clear_bit(RXRPC_CONN_EV_ABORT_CALLS, &conn->events)) + rxrpc_abort_calls(conn); + /* Process delayed ACKs whose time has come. */ if (conn->flags & RXRPC_CONN_FINAL_ACK_MASK) rxrpc_process_delayed_final_acks(conn, false); diff --git a/net/rxrpc/insecure.c b/net/rxrpc/insecure.c index 0eb8471bfc53..29dcc7d3f51a 100644 --- a/net/rxrpc/insecure.c +++ b/net/rxrpc/insecure.c @@ -43,25 +43,15 @@ static void none_free_call_crypto(struct rxrpc_call *call) } static int none_respond_to_challenge(struct rxrpc_connection *conn, - struct sk_buff *skb, - u32 *_abort_code) + struct sk_buff *skb) { - struct rxrpc_skb_priv *sp = rxrpc_skb(skb); - - trace_rxrpc_rx_eproto(NULL, sp->hdr.serial, - tracepoint_string("chall_none")); - return -EPROTO; + return rxrpc_abort_conn(conn, skb, RX_PROTOCOL_ERROR, -EPROTO, "RXN"); } static int none_verify_response(struct rxrpc_connection *conn, - struct sk_buff *skb, - u32 *_abort_code) + struct sk_buff *skb) { - struct rxrpc_skb_priv *sp = rxrpc_skb(skb); - - trace_rxrpc_rx_eproto(NULL, sp->hdr.serial, - tracepoint_string("resp_none")); - return -EPROTO; + return rxrpc_abort_conn(conn, skb, RX_PROTOCOL_ERROR, -EPROTO, "RXN"); } static void none_clear(struct rxrpc_connection *conn) diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index 4c2c4b9828b8..4244fbf87fe6 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -517,6 +517,62 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) goto done; } +/* + * Transmit a connection-level abort. 
+ */ +void rxrpc_send_conn_abort(struct rxrpc_connection *conn) +{ + struct rxrpc_wire_header whdr; + struct msghdr msg; + struct kvec iov[2]; + __be32 word; + size_t len; + u32 serial; + int ret; + + msg.msg_name = &conn->peer->srx.transport; + msg.msg_namelen = conn->peer->srx.transport_len; + msg.msg_control = NULL; + msg.msg_controllen = 0; + msg.msg_flags = 0; + + whdr.epoch = htonl(conn->proto.epoch); + whdr.cid = htonl(conn->proto.cid); + whdr.callNumber = 0; + whdr.seq = 0; + whdr.type = RXRPC_PACKET_TYPE_ABORT; + whdr.flags = conn->out_clientflag; + whdr.userStatus = 0; + whdr.securityIndex = conn->security_ix; + whdr._rsvd = 0; + whdr.serviceId = htons(conn->service_id); + + word = htonl(conn->abort_code); + + iov[0].iov_base = &whdr; + iov[0].iov_len = sizeof(whdr); + iov[1].iov_base = &word; + iov[1].iov_len = sizeof(word); + + len = iov[0].iov_len + iov[1].iov_len; + + serial = atomic_inc_return(&conn->serial); + whdr.serial = htonl(serial); + + iov_iter_kvec(&msg.msg_iter, WRITE, iov, 2, len); + ret = do_udp_sendmsg(conn->local->socket, &msg, len); + if (ret < 0) { + trace_rxrpc_tx_fail(conn->debug_id, serial, ret, + rxrpc_tx_point_conn_abort); + _debug("sendmsg failed: %d", ret); + return; + } + + trace_rxrpc_tx_packet(conn->debug_id, &whdr, rxrpc_tx_point_conn_abort); + + conn->peer->last_tx_at = ktime_get_seconds(); +} + /* * Reject a packet through the local endpoint. */ diff --git a/net/rxrpc/proc.c b/net/rxrpc/proc.c index 38ea7df3fc42..5c56bcab578c 100644 --- a/net/rxrpc/proc.c +++ b/net/rxrpc/proc.c @@ -17,8 +17,7 @@ static const char *const rxrpc_conn_states[RXRPC_CONN__NR_STATES] = { [RXRPC_CONN_SERVICE_UNSECURED] = "SvUnsec ", [RXRPC_CONN_SERVICE_CHALLENGING] = "SvChall ", [RXRPC_CONN_SERVICE] = "SvSecure", - [RXRPC_CONN_REMOTELY_ABORTED] = "RmtAbort", - [RXRPC_CONN_LOCALLY_ABORTED] = "LocAbort", + [RXRPC_CONN_ABORTED] = "Aborted ", }; /* @@ -141,6 +140,7 @@ static int rxrpc_connection_seq_show(struct seq_file *seq, void *v) { struct rxrpc_connection *conn; struct rxrpc_net *rxnet = rxrpc_net(seq_file_net(seq)); + const char *state; char lbuff[50], rbuff[50]; if (v == &rxnet->conn_proc_list) { @@ -161,9 +161,11 @@ static int rxrpc_connection_seq_show(struct seq_file *seq, void *v) } sprintf(lbuff, "%pISpc", &conn->local->srx.transport); - sprintf(rbuff, "%pISpc", &conn->peer->srx.transport); print: + state = smp_load_acquire(&conn->state) == RXRPC_CONN_ABORTED ? + rxrpc_call_completions[conn->completion] : + rxrpc_conn_states[conn->state]; seq_printf(seq, "UDP %-47.47s %-47.47s %4x %08x %s %3u %3d" " %s %08x %08x %08x %08x %08x %08x %08x\n", @@ -174,7 +176,7 @@ static int rxrpc_connection_seq_show(struct seq_file *seq, void *v) rxrpc_conn_is_service(conn) ? 
"Svc" : "Clt", refcount_read(&conn->ref), atomic_read(&conn->active), - rxrpc_conn_states[conn->state], + state, key_serial(conn->key), atomic_read(&conn->serial), conn->hi_serial, diff --git a/net/rxrpc/rxkad.c b/net/rxrpc/rxkad.c index d1233720e05f..5d2fbc6ec3cf 100644 --- a/net/rxrpc/rxkad.c +++ b/net/rxrpc/rxkad.c @@ -821,8 +821,7 @@ static int rxkad_encrypt_response(struct rxrpc_connection *conn, * respond to a challenge packet */ static int rxkad_respond_to_challenge(struct rxrpc_connection *conn, - struct sk_buff *skb, - u32 *_abort_code) + struct sk_buff *skb) { const struct rxrpc_key_token *token; struct rxkad_challenge challenge; @@ -898,7 +897,7 @@ static int rxkad_respond_to_challenge(struct rxrpc_connection *conn, trace_rxrpc_rx_eproto(NULL, sp->hdr.serial, eproto); ret = -EPROTO; other_error: - *_abort_code = abort_code; + rxrpc_abort_conn(conn, skb, abort_code, ret, "RXK"); return ret; } @@ -910,8 +909,7 @@ static int rxkad_decrypt_ticket(struct rxrpc_connection *conn, struct sk_buff *skb, void *ticket, size_t ticket_len, struct rxrpc_crypt *_session_key, - time64_t *_expiry, - u32 *_abort_code) + time64_t *_expiry) { struct skcipher_request *req; struct rxrpc_skb_priv *sp = rxrpc_skb(skb); @@ -1042,8 +1040,7 @@ static int rxkad_decrypt_ticket(struct rxrpc_connection *conn, abort_code = RXKADBADTICKET; ret = -EPROTO; other_error: - *_abort_code = abort_code; - return ret; + return rxrpc_abort_conn(conn, skb, abort_code, ret, "RXK"); temporary_error: return ret; } @@ -1086,8 +1083,7 @@ static void rxkad_decrypt_response(struct rxrpc_connection *conn, * verify a response */ static int rxkad_verify_response(struct rxrpc_connection *conn, - struct sk_buff *skb, - u32 *_abort_code) + struct sk_buff *skb) { struct rxkad_response *response; struct rxrpc_skb_priv *sp = rxrpc_skb(skb); @@ -1115,11 +1111,8 @@ static int rxkad_verify_response(struct rxrpc_connection *conn, abort_code = RXKADNOAUTH; break; } - trace_rxrpc_abort(0, "SVK", - sp->hdr.cid, sp->hdr.callNumber, sp->hdr.seq, - abort_code, PTR_ERR(server_key)); - *_abort_code = abort_code; - return -EPROTO; + return rxrpc_abort_conn(conn, skb, abort_code, + PTR_ERR(server_key), "RXK"); } ret = -ENOMEM; @@ -1168,7 +1161,7 @@ static int rxkad_verify_response(struct rxrpc_connection *conn, goto temporary_error_free_ticket; ret = rxkad_decrypt_ticket(conn, server_key, skb, ticket, ticket_len, - &session_key, &expiry, _abort_code); + &session_key, &expiry); if (ret < 0) goto temporary_error_free_ticket; @@ -1246,10 +1239,9 @@ static int rxkad_verify_response(struct rxrpc_connection *conn, kfree(ticket); protocol_error: kfree(response); - trace_rxrpc_rx_eproto(NULL, sp->hdr.serial, eproto); key_put(server_key); - *_abort_code = abort_code; - return -EPROTO; + trace_rxrpc_rx_eproto(NULL, sp->hdr.serial, eproto); + return rxrpc_abort_conn(conn, skb, abort_code, -EPROTO, "RXK"); temporary_error_free_ticket: kfree(ticket); From patchwork Tue Dec 6 16:01:00 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 30408 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp2906316wrr; Tue, 6 Dec 2022 08:05:31 -0800 (PST) X-Google-Smtp-Source: AA0mqf48tmPS5Od9n5NKK7qAImW/sGjqukE3+7cD4zgrcqEcyZcsMd3cQmnsdDNhkinecw6Kzqyu X-Received: by 2002:a17:906:398b:b0:7ad:b868:f096 with SMTP id h11-20020a170906398b00b007adb868f096mr73789013eje.295.1670342731775; Tue, 06 Dec 2022 08:05:31 -0800 (PST) ARC-Seal: i=1; 
Subject: [PATCH net-next 17/32] rxrpc: Tidy up abort generation infrastructure
From: David Howells
To: netdev@vger.kernel.org
Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org
Date: Tue, 06 Dec 2022 16:01:00 +0000
Message-ID: <167034245999.1105287.14721103400935951983.stgit@warthog.procyon.org.uk>
In-Reply-To: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk>
References: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk>
User-Agent: StGit/1.5
MIME-Version: 1.0

Tidy up the abort generation infrastructure in the following ways:

 (1) Create an enum and string mapping table to list the reasons an abort
     might be generated in tracing.

 (2) Replace the 3-char string with the values from (1) in the places that
     use that to log the abort source. This gets rid of a memcpy() in the
     tracepoint.

 (3) Subsume the rxrpc_rx_eproto tracepoint with the rxrpc_abort tracepoint
     and use values from (1) to indicate the trace reason.

 (4) Always make a call to an abort function at the point of the abort
     rather than stashing the values into variables and using goto to get
     to a place where it is reported. The C optimiser will collapse the
     calls together as appropriate. The abort functions return a value
     that can be returned directly if appropriate.

Note that this extends into afs also at the points where that generates an
abort. To aid with this, the afs sources need to #define
RXRPC_TRACE_ONLY_DEFINE_ENUMS before including the rxrpc tracing header
because they don't have access to the rxrpc internal structures that some
of the tracepoints make use of.
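The enum-to-string mapping in (1) uses the usual single-list macro expansion
trick from the tracing headers: one EM()/E_() list is expanded once to produce
the enum constants and again to produce the matching name strings. The sketch
below shows the general technique as self-contained userspace C; the demo_*
names are invented for illustration and are not part of the patch. In the
patch itself the list is rxrpc_abort_reasons in include/trace/events/rxrpc.h,
expanded into enum rxrpc_abort_reason and into TRACE_DEFINE_ENUM() /
__print_symbolic() entries for the rxrpc_abort tracepoint rather than into a
plain array.

	#include <stdio.h>

	/*
	 * Illustrative only: a hypothetical abort-reason list showing how a
	 * single EM()/E_() list can drive both an enum and a string table.
	 */
	#define demo_abort_reasons \
		EM(demo_abort_nomem,		"nomem") \
		EM(demo_abort_shut_down,	"shut-down") \
		E_(demo_abort_bad_message,	"bad-msg")

	/* First expansion: the enum constants. */
	#define EM(a, b) a,
	#define E_(a, b) a
	enum demo_abort_reason { demo_abort_reasons };
	#undef EM
	#undef E_

	/* Second expansion: the matching human-readable names. */
	#define EM(a, b) [a] = b,
	#define E_(a, b) [a] = b
	static const char *const demo_abort_names[] = { demo_abort_reasons };
	#undef EM
	#undef E_

	int main(void)
	{
		/* One list, two views: prints "shut-down". */
		printf("%s\n", demo_abort_names[demo_abort_shut_down]);
		return 0;
	}

The same structure is what makes the afs side work: when
RXRPC_TRACE_ONLY_DEFINE_ENUMS is defined, only the first expansion (the enum)
is compiled, so fs/afs can name abort reasons without pulling in the
tracepoint definitions that depend on rxrpc-internal structures.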
Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- fs/afs/cmservice.c | 6 + fs/afs/rxrpc.c | 23 ++- include/net/af_rxrpc.h | 3 include/trace/events/rxrpc.h | 140 +++++++++++++++--- net/rxrpc/ar-internal.h | 39 +++-- net/rxrpc/call_accept.c | 27 ++-- net/rxrpc/call_event.c | 13 +- net/rxrpc/call_object.c | 6 + net/rxrpc/conn_event.c | 19 +- net/rxrpc/input.c | 65 ++++----- net/rxrpc/insecure.c | 6 + net/rxrpc/io_thread.c | 165 ++++++++++------------ net/rxrpc/recvmsg.c | 20 ++- net/rxrpc/rxkad.c | 321 +++++++++++++++++------------------------- net/rxrpc/rxperf.c | 17 ++ net/rxrpc/security.c | 14 +- net/rxrpc/sendmsg.c | 20 ++- 17 files changed, 470 insertions(+), 434 deletions(-) diff --git a/fs/afs/cmservice.c b/fs/afs/cmservice.c index 0a090d614e76..6fcc09201697 100644 --- a/fs/afs/cmservice.c +++ b/fs/afs/cmservice.c @@ -13,6 +13,8 @@ #include "internal.h" #include "afs_cm.h" #include "protocol_yfs.h" +#define RXRPC_TRACE_ONLY_DEFINE_ENUMS +#include static int afs_deliver_cb_init_call_back_state(struct afs_call *); static int afs_deliver_cb_init_call_back_state3(struct afs_call *); @@ -191,7 +193,7 @@ static void afs_cm_destructor(struct afs_call *call) * Abort a service call from within an action function. */ static void afs_abort_service_call(struct afs_call *call, u32 abort_code, int error, - const char *why) + enum rxrpc_abort_reason why) { rxrpc_kernel_abort_call(call->net->socket, call->rxcall, abort_code, error, why); @@ -469,7 +471,7 @@ static void SRXAFSCB_ProbeUuid(struct work_struct *work) if (memcmp(r, &call->net->uuid, sizeof(call->net->uuid)) == 0) afs_send_empty_reply(call); else - afs_abort_service_call(call, 1, 1, "K-1"); + afs_abort_service_call(call, 1, 1, afs_abort_probeuuid_negative); afs_put_call(call); _leave(""); diff --git a/fs/afs/rxrpc.c b/fs/afs/rxrpc.c index eccc3cd0cb70..cfff7f9ccc33 100644 --- a/fs/afs/rxrpc.c +++ b/fs/afs/rxrpc.c @@ -13,6 +13,8 @@ #include "internal.h" #include "afs_cm.h" #include "protocol_yfs.h" +#define RXRPC_TRACE_ONLY_DEFINE_ENUMS +#include struct workqueue_struct *afs_async_calls; @@ -397,7 +399,8 @@ void afs_make_call(struct afs_addr_cursor *ac, struct afs_call *call, gfp_t gfp) error_do_abort: if (ret != -ECONNABORTED) { rxrpc_kernel_abort_call(call->net->socket, rxcall, - RX_USER_ABORT, ret, "KSD"); + RX_USER_ABORT, ret, + afs_abort_send_data_error); } else { len = 0; iov_iter_kvec(&msg.msg_iter, READ, NULL, 0, 0); @@ -527,7 +530,8 @@ static void afs_deliver_to_call(struct afs_call *call) case -ENOTSUPP: abort_code = RXGEN_OPCODE; rxrpc_kernel_abort_call(call->net->socket, call->rxcall, - abort_code, ret, "KIV"); + abort_code, ret, + afs_abort_op_not_supported); goto local_abort; case -EIO: pr_err("kAFS: Call %u in bad state %u\n", @@ -542,12 +546,14 @@ static void afs_deliver_to_call(struct afs_call *call) if (state != AFS_CALL_CL_AWAIT_REPLY) abort_code = RXGEN_SS_UNMARSHAL; rxrpc_kernel_abort_call(call->net->socket, call->rxcall, - abort_code, ret, "KUM"); + abort_code, ret, + afs_abort_unmarshal_error); goto local_abort; default: abort_code = RX_CALL_DEAD; rxrpc_kernel_abort_call(call->net->socket, call->rxcall, - abort_code, ret, "KER"); + abort_code, ret, + afs_abort_general_error); goto local_abort; } } @@ -619,7 +625,8 @@ long afs_wait_for_call_to_complete(struct afs_call *call, /* Kill off the call if it's still live. 
*/ _debug("call interrupted"); if (rxrpc_kernel_abort_call(call->net->socket, call->rxcall, - RX_USER_ABORT, -EINTR, "KWI")) + RX_USER_ABORT, -EINTR, + afs_abort_interrupted)) afs_set_call_complete(call, -EINTR, 0); } } @@ -836,7 +843,8 @@ void afs_send_empty_reply(struct afs_call *call) case -ENOMEM: _debug("oom"); rxrpc_kernel_abort_call(net->socket, call->rxcall, - RXGEN_SS_MARSHAL, -ENOMEM, "KOO"); + RXGEN_SS_MARSHAL, -ENOMEM, + afs_abort_oom); fallthrough; default: _leave(" [error]"); @@ -878,7 +886,8 @@ void afs_send_simple_reply(struct afs_call *call, const void *buf, size_t len) if (n == -ENOMEM) { _debug("oom"); rxrpc_kernel_abort_call(net->socket, call->rxcall, - RXGEN_SS_MARSHAL, -ENOMEM, "KOO"); + RXGEN_SS_MARSHAL, -ENOMEM, + afs_abort_oom); } _leave(" [error]"); } diff --git a/include/net/af_rxrpc.h b/include/net/af_rxrpc.h index d5a5ae926380..ba717eac0229 100644 --- a/include/net/af_rxrpc.h +++ b/include/net/af_rxrpc.h @@ -15,6 +15,7 @@ struct key; struct sock; struct socket; struct rxrpc_call; +enum rxrpc_abort_reason; enum rxrpc_interruptibility { RXRPC_INTERRUPTIBLE, /* Call is interruptible */ @@ -55,7 +56,7 @@ int rxrpc_kernel_send_data(struct socket *, struct rxrpc_call *, int rxrpc_kernel_recv_data(struct socket *, struct rxrpc_call *, struct iov_iter *, size_t *, bool, u32 *, u16 *); bool rxrpc_kernel_abort_call(struct socket *, struct rxrpc_call *, - u32, int, const char *); + u32, int, enum rxrpc_abort_reason); void rxrpc_kernel_end_call(struct socket *, struct rxrpc_call *); void rxrpc_kernel_get_peer(struct socket *, struct rxrpc_call *, struct sockaddr_rxrpc *); diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index d9c754f8ef2b..76d0bdc3e170 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -23,6 +23,104 @@ EM(rxrpc_sack_none, "---") \ E_(rxrpc_sack_oos, "OOS") +#define rxrpc_abort_reasons \ + /* AFS errors */ \ + EM(afs_abort_general_error, "afs-error") \ + EM(afs_abort_interrupted, "afs-intr") \ + EM(afs_abort_oom, "afs-oom") \ + EM(afs_abort_op_not_supported, "afs-op-notsupp") \ + EM(afs_abort_probeuuid_negative, "afs-probeuuid-neg") \ + EM(afs_abort_send_data_error, "afs-send-data") \ + EM(afs_abort_unmarshal_error, "afs-unmarshal") \ + /* rxperf errors */ \ + EM(rxperf_abort_general_error, "rxperf-error") \ + EM(rxperf_abort_oom, "rxperf-oom") \ + EM(rxperf_abort_op_not_supported, "rxperf-op-notsupp") \ + EM(rxperf_abort_unmarshal_error, "rxperf-unmarshal") \ + /* RxKAD security errors */ \ + EM(rxkad_abort_1_short_check, "rxkad1-short-check") \ + EM(rxkad_abort_1_short_data, "rxkad1-short-data") \ + EM(rxkad_abort_1_short_encdata, "rxkad1-short-encdata") \ + EM(rxkad_abort_1_short_header, "rxkad1-short-hdr") \ + EM(rxkad_abort_2_short_check, "rxkad2-short-check") \ + EM(rxkad_abort_2_short_data, "rxkad2-short-data") \ + EM(rxkad_abort_2_short_header, "rxkad2-short-hdr") \ + EM(rxkad_abort_2_short_len, "rxkad2-short-len") \ + EM(rxkad_abort_bad_checksum, "rxkad2-bad-cksum") \ + EM(rxkad_abort_chall_key_expired, "rxkad-chall-key-exp") \ + EM(rxkad_abort_chall_level, "rxkad-chall-level") \ + EM(rxkad_abort_chall_no_key, "rxkad-chall-nokey") \ + EM(rxkad_abort_chall_short, "rxkad-chall-short") \ + EM(rxkad_abort_chall_version, "rxkad-chall-version") \ + EM(rxkad_abort_resp_bad_callid, "rxkad-resp-bad-callid") \ + EM(rxkad_abort_resp_bad_checksum, "rxkad-resp-bad-cksum") \ + EM(rxkad_abort_resp_bad_param, "rxkad-resp-bad-param") \ + EM(rxkad_abort_resp_call_ctr, "rxkad-resp-call-ctr") \ + 
EM(rxkad_abort_resp_call_state, "rxkad-resp-call-state") \ + EM(rxkad_abort_resp_key_expired, "rxkad-resp-key-exp") \ + EM(rxkad_abort_resp_key_rejected, "rxkad-resp-key-rej") \ + EM(rxkad_abort_resp_level, "rxkad-resp-level") \ + EM(rxkad_abort_resp_nokey, "rxkad-resp-nokey") \ + EM(rxkad_abort_resp_ooseq, "rxkad-resp-ooseq") \ + EM(rxkad_abort_resp_short, "rxkad-resp-short") \ + EM(rxkad_abort_resp_short_tkt, "rxkad-resp-short-tkt") \ + EM(rxkad_abort_resp_tkt_aname, "rxkad-resp-tk-aname") \ + EM(rxkad_abort_resp_tkt_expired, "rxkad-resp-tk-exp") \ + EM(rxkad_abort_resp_tkt_future, "rxkad-resp-tk-future") \ + EM(rxkad_abort_resp_tkt_inst, "rxkad-resp-tk-inst") \ + EM(rxkad_abort_resp_tkt_len, "rxkad-resp-tk-len") \ + EM(rxkad_abort_resp_tkt_realm, "rxkad-resp-tk-realm") \ + EM(rxkad_abort_resp_tkt_short, "rxkad-resp-tk-short") \ + EM(rxkad_abort_resp_tkt_sinst, "rxkad-resp-tk-sinst") \ + EM(rxkad_abort_resp_tkt_sname, "rxkad-resp-tk-sname") \ + EM(rxkad_abort_resp_unknown_tkt, "rxkad-resp-unknown-tkt") \ + EM(rxkad_abort_resp_version, "rxkad-resp-version") \ + /* rxrpc errors */ \ + EM(rxrpc_abort_call_improper_term, "call-improper-term") \ + EM(rxrpc_abort_call_reset, "call-reset") \ + EM(rxrpc_abort_call_sendmsg, "call-sendmsg") \ + EM(rxrpc_abort_call_sock_release, "call-sock-rel") \ + EM(rxrpc_abort_call_sock_release_tba, "call-sock-rel-tba") \ + EM(rxrpc_abort_call_timeout, "call-timeout") \ + EM(rxrpc_abort_no_service_key, "no-serv-key") \ + EM(rxrpc_abort_nomem, "nomem") \ + EM(rxrpc_abort_service_not_offered, "serv-not-offered") \ + EM(rxrpc_abort_shut_down, "shut-down") \ + EM(rxrpc_abort_unsupported_security, "unsup-sec") \ + EM(rxrpc_badmsg_bad_abort, "bad-abort") \ + EM(rxrpc_badmsg_bad_jumbo, "bad-jumbo") \ + EM(rxrpc_badmsg_short_ack, "short-ack") \ + EM(rxrpc_badmsg_short_ack_info, "short-ack-info") \ + EM(rxrpc_badmsg_short_hdr, "short-hdr") \ + EM(rxrpc_badmsg_unsupported_packet, "unsup-pkt") \ + EM(rxrpc_badmsg_zero_call, "zero-call") \ + EM(rxrpc_badmsg_zero_seq, "zero-seq") \ + EM(rxrpc_badmsg_zero_service, "zero-service") \ + EM(rxrpc_eproto_ackr_outside_window, "ackr-out-win") \ + EM(rxrpc_eproto_ackr_sack_overflow, "ackr-sack-over") \ + EM(rxrpc_eproto_ackr_short_sack, "ackr-short-sack") \ + EM(rxrpc_eproto_ackr_zero, "ackr-zero") \ + EM(rxrpc_eproto_bad_upgrade, "bad-upgrade") \ + EM(rxrpc_eproto_data_after_last, "data-after-last") \ + EM(rxrpc_eproto_different_last, "diff-last") \ + EM(rxrpc_eproto_early_reply, "early-reply") \ + EM(rxrpc_eproto_improper_term, "improper-term") \ + EM(rxrpc_eproto_no_client_call, "no-cl-call") \ + EM(rxrpc_eproto_no_client_conn, "no-cl-conn") \ + EM(rxrpc_eproto_no_service_call, "no-sv-call") \ + EM(rxrpc_eproto_reupgrade, "re-upgrade") \ + EM(rxrpc_eproto_rxnull_challenge, "rxnull-chall") \ + EM(rxrpc_eproto_rxnull_response, "rxnull-resp") \ + EM(rxrpc_eproto_tx_rot_last, "tx-rot-last") \ + EM(rxrpc_eproto_unexpected_ack, "unex-ack") \ + EM(rxrpc_eproto_unexpected_ackall, "unex-ackall") \ + EM(rxrpc_eproto_unexpected_implicit_end, "unex-impl-end") \ + EM(rxrpc_eproto_unexpected_reply, "unex-reply") \ + EM(rxrpc_eproto_wrong_security, "wrong-sec") \ + EM(rxrpc_recvmsg_excess_data, "recvmsg-excess") \ + EM(rxrpc_recvmsg_short_data, "recvmsg-short") \ + E_(rxrpc_sendmsg_late_send, "sendmsg-late") + #define rxrpc_call_poke_traces \ EM(rxrpc_call_poke_abort, "Abort") \ EM(rxrpc_call_poke_complete, "Compl") \ @@ -387,6 +485,7 @@ #define EM(a, b) a, #define E_(a, b) a +enum rxrpc_abort_reason { rxrpc_abort_reasons } __mode(byte); enum 
rxrpc_bundle_trace { rxrpc_bundle_traces } __mode(byte); enum rxrpc_call_poke_trace { rxrpc_call_poke_traces } __mode(byte); enum rxrpc_call_trace { rxrpc_call_traces } __mode(byte); @@ -416,9 +515,13 @@ enum rxrpc_txqueue_trace { rxrpc_txqueue_traces } __mode(byte); */ #undef EM #undef E_ + +#ifndef RXRPC_TRACE_ONLY_DEFINE_ENUMS + #define EM(a, b) TRACE_DEFINE_ENUM(a); #define E_(a, b) TRACE_DEFINE_ENUM(a); +rxrpc_abort_reasons; rxrpc_bundle_traces; rxrpc_call_poke_traces; rxrpc_call_traces; @@ -670,14 +773,14 @@ TRACE_EVENT(rxrpc_rx_done, ); TRACE_EVENT(rxrpc_abort, - TP_PROTO(unsigned int call_nr, const char *why, u32 cid, u32 call_id, - rxrpc_seq_t seq, int abort_code, int error), + TP_PROTO(unsigned int call_nr, enum rxrpc_abort_reason why, + u32 cid, u32 call_id, rxrpc_seq_t seq, int abort_code, int error), TP_ARGS(call_nr, why, cid, call_id, seq, abort_code, error), TP_STRUCT__entry( __field(unsigned int, call_nr ) - __array(char, why, 4 ) + __field(enum rxrpc_abort_reason, why ) __field(u32, cid ) __field(u32, call_id ) __field(rxrpc_seq_t, seq ) @@ -686,8 +789,8 @@ TRACE_EVENT(rxrpc_abort, ), TP_fast_assign( - memcpy(__entry->why, why, 4); __entry->call_nr = call_nr; + __entry->why = why; __entry->cid = cid; __entry->call_id = call_id; __entry->abort_code = abort_code; @@ -698,7 +801,8 @@ TRACE_EVENT(rxrpc_abort, TP_printk("c=%08x %08x:%08x s=%u a=%d e=%d %s", __entry->call_nr, __entry->cid, __entry->call_id, __entry->seq, - __entry->abort_code, __entry->error, __entry->why) + __entry->abort_code, __entry->error, + __print_symbolic(__entry->why, rxrpc_abort_reasons)) ); TRACE_EVENT(rxrpc_call_complete, @@ -1536,30 +1640,6 @@ TRACE_EVENT(rxrpc_improper_term, __entry->abort_code) ); -TRACE_EVENT(rxrpc_rx_eproto, - TP_PROTO(struct rxrpc_call *call, rxrpc_serial_t serial, - const char *why), - - TP_ARGS(call, serial, why), - - TP_STRUCT__entry( - __field(unsigned int, call ) - __field(rxrpc_serial_t, serial ) - __field(const char *, why ) - ), - - TP_fast_assign( - __entry->call = call ? 
call->debug_id : 0; - __entry->serial = serial; - __entry->why = why; - ), - - TP_printk("c=%08x EPROTO %08x %s", - __entry->call, - __entry->serial, - __entry->why) - ); - TRACE_EVENT(rxrpc_connect_call, TP_PROTO(struct rxrpc_call *call), @@ -1884,6 +1964,8 @@ TRACE_EVENT(rxrpc_sack, #undef EM #undef E_ + +#endif /* RXRPC_TRACE_ONLY_DEFINE_ENUMS */ #endif /* _TRACE_RXRPC_H */ /* This part must be outside protection */ diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index ab031f899c2d..f2ab2d8044bd 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -627,9 +627,10 @@ struct rxrpc_call { unsigned long events; spinlock_t notify_lock; /* Kernel notification lock */ spinlock_t state_lock; /* lock for state transition */ - const char *send_abort_why; /* String indicating why the abort was sent */ + unsigned int send_abort_why; /* Why the abort [enum rxrpc_abort_reason] */ s32 send_abort; /* Abort code to be sent */ short send_abort_err; /* Error to be associated with the abort */ + rxrpc_seq_t send_abort_seq; /* DATA packet that incurred the abort (or 0) */ s32 abort_code; /* Local/remote abort code */ int error; /* Local error incurred */ enum rxrpc_call_state state; /* current state of call */ @@ -842,7 +843,7 @@ void rxrpc_reduce_call_timer(struct rxrpc_call *call, unsigned long now, enum rxrpc_timer_trace why); -void rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb); +bool rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb); /* * call_object.c @@ -904,12 +905,13 @@ void rxrpc_clean_up_local_conns(struct rxrpc_local *); /* * conn_event.c */ +int rxrpc_abort_conn(struct rxrpc_connection *, struct sk_buff *, s32, int, + enum rxrpc_abort_reason); void rxrpc_conn_retransmit_call(struct rxrpc_connection *, struct sk_buff *, unsigned int); -int rxrpc_abort_conn(struct rxrpc_connection *, struct sk_buff *, s32, int, const char *); void rxrpc_process_connection(struct work_struct *); void rxrpc_process_delayed_final_acks(struct rxrpc_connection *, bool); -int rxrpc_input_conn_packet(struct rxrpc_connection *conn, struct sk_buff *skb); +bool rxrpc_input_conn_packet(struct rxrpc_connection *conn, struct sk_buff *skb); void rxrpc_input_conn_event(struct rxrpc_connection *, struct sk_buff *); /* @@ -974,12 +976,18 @@ void rxrpc_implicit_end_call(struct rxrpc_call *, struct sk_buff *); */ int rxrpc_encap_rcv(struct sock *, struct sk_buff *); void rxrpc_error_report(struct sock *); +bool rxrpc_direct_abort(struct sk_buff *, enum rxrpc_abort_reason, s32, int); int rxrpc_io_thread(void *data); static inline void rxrpc_wake_up_io_thread(struct rxrpc_local *local) { wake_up_process(local->io_thread); } +static inline bool rxrpc_protocol_error(struct sk_buff *skb, enum rxrpc_abort_reason why) +{ + return rxrpc_direct_abort(skb, why, RX_PROTOCOL_ERROR, -EPROTO); +} + /* * insecure.c */ @@ -1108,29 +1116,24 @@ bool __rxrpc_set_call_completion(struct rxrpc_call *, enum rxrpc_call_completion bool rxrpc_set_call_completion(struct rxrpc_call *, enum rxrpc_call_completion, u32, int); bool __rxrpc_call_completed(struct rxrpc_call *); bool rxrpc_call_completed(struct rxrpc_call *); -bool __rxrpc_abort_call(const char *, struct rxrpc_call *, rxrpc_seq_t, u32, int); -bool rxrpc_abort_call(const char *, struct rxrpc_call *, rxrpc_seq_t, u32, int); +bool __rxrpc_abort_call(struct rxrpc_call *, rxrpc_seq_t, u32, int, enum rxrpc_abort_reason); +bool rxrpc_abort_call(struct rxrpc_call *, rxrpc_seq_t, u32, int, enum rxrpc_abort_reason); int 
rxrpc_recvmsg(struct socket *, struct msghdr *, size_t, int); /* * Abort a call due to a protocol error. */ -static inline bool __rxrpc_abort_eproto(struct rxrpc_call *call, - struct sk_buff *skb, - const char *eproto_why, - const char *why, - u32 abort_code) +static inline int rxrpc_abort_eproto(struct rxrpc_call *call, + struct sk_buff *skb, + s32 abort_code, + enum rxrpc_abort_reason why) { struct rxrpc_skb_priv *sp = rxrpc_skb(skb); - trace_rxrpc_rx_eproto(call, sp->hdr.serial, eproto_why); - return rxrpc_abort_call(why, call, sp->hdr.seq, abort_code, -EPROTO); + rxrpc_abort_call(call, sp->hdr.seq, abort_code, -EPROTO, why); + return -EPROTO; } -#define rxrpc_abort_eproto(call, skb, eproto_why, abort_why, abort_code) \ - __rxrpc_abort_eproto((call), (skb), tracepoint_string(eproto_why), \ - (abort_why), (abort_code)) - /* * rtt.c */ @@ -1162,7 +1165,7 @@ struct key *rxrpc_look_up_server_security(struct rxrpc_connection *, /* * sendmsg.c */ -bool rxrpc_propose_abort(struct rxrpc_call *, u32, int, const char *); +bool rxrpc_propose_abort(struct rxrpc_call *, s32, int, enum rxrpc_abort_reason); int rxrpc_do_sendmsg(struct rxrpc_sock *, struct msghdr *, size_t); /* diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c index d1850863507f..5fa81bb4abfa 100644 --- a/net/rxrpc/call_accept.c +++ b/net/rxrpc/call_accept.c @@ -339,10 +339,9 @@ bool rxrpc_new_incoming_call(struct rxrpc_local *local, _enter(""); - /* Don't set up a call for anything other than the first DATA packet. */ - if (sp->hdr.seq != 1 || - sp->hdr.type != RXRPC_PACKET_TYPE_DATA) - return true; /* Just discard */ + /* Don't set up a call for anything other than a DATA packet. */ + if (sp->hdr.type != RXRPC_PACKET_TYPE_DATA) + return rxrpc_protocol_error(skb, rxrpc_eproto_no_service_call); rcu_read_lock(); @@ -363,16 +362,14 @@ bool rxrpc_new_incoming_call(struct rxrpc_local *local, if (!conn) { sec = rxrpc_get_incoming_security(rx, skb); if (!sec) - goto reject; + goto unsupported_security; } spin_lock(&rx->incoming_lock); if (rx->sk.sk_state == RXRPC_SERVER_LISTEN_DISABLED || rx->sk.sk_state == RXRPC_CLOSE) { - trace_rxrpc_abort(0, "CLS", sp->hdr.cid, sp->hdr.callNumber, - sp->hdr.seq, RX_INVALID_OPERATION, ESHUTDOWN); - skb->mark = RXRPC_SKB_MARK_REJECT_ABORT; - skb->priority = RX_INVALID_OPERATION; + rxrpc_direct_abort(skb, rxrpc_abort_shut_down, + RX_INVALID_OPERATION, -ESHUTDOWN); goto no_call; } @@ -416,13 +413,15 @@ bool rxrpc_new_incoming_call(struct rxrpc_local *local, return true; unsupported_service: - trace_rxrpc_abort(0, "INV", sp->hdr.cid, sp->hdr.callNumber, sp->hdr.seq, - RX_INVALID_OPERATION, EOPNOTSUPP); - skb->priority = RX_INVALID_OPERATION; - goto reject; + rcu_read_unlock(); + return rxrpc_direct_abort(skb, rxrpc_abort_service_not_offered, + RX_INVALID_OPERATION, -EOPNOTSUPP); +unsupported_security: + rcu_read_unlock(); + return rxrpc_direct_abort(skb, rxrpc_abort_service_not_offered, + RX_INVALID_OPERATION, -EKEYREJECTED); no_call: spin_unlock(&rx->incoming_lock); -reject: rcu_read_unlock(); _leave(" = f [%u]", skb->mark); return false; diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c index 60bec6feba4a..b41ffb0a69e9 100644 --- a/net/rxrpc/call_event.c +++ b/net/rxrpc/call_event.c @@ -331,7 +331,7 @@ static void rxrpc_send_initial_ping(struct rxrpc_call *call) /* * Handle retransmission and deferred ACK/abort generation. 
*/ -void rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) +bool rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) { unsigned long now, next, t; rxrpc_serial_t ackr_serial; @@ -356,8 +356,8 @@ void rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) abort_code = smp_load_acquire(&call->send_abort); if (abort_code) - rxrpc_abort_call(call->send_abort_why, call, 0, call->send_abort, - call->send_abort_err); + rxrpc_abort_call(call, 0, call->send_abort, call->send_abort_err, + call->send_abort_why); if (skb && skb->mark == RXRPC_SKB_MARK_ERROR) goto out; @@ -442,9 +442,11 @@ void rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) if (test_bit(RXRPC_CALL_RX_HEARD, &call->flags) && (int)call->conn->hi_serial - (int)call->rx_serial > 0) { trace_rxrpc_call_reset(call); - rxrpc_abort_call("EXP", call, 0, RX_CALL_DEAD, -ECONNRESET); + rxrpc_abort_call(call, 0, RX_CALL_DEAD, -ECONNRESET, + rxrpc_abort_call_reset); } else { - rxrpc_abort_call("EXP", call, 0, RX_CALL_TIMEOUT, -ETIME); + rxrpc_abort_call(call, 0, RX_CALL_TIMEOUT, -ETIME, + rxrpc_abort_call_timeout); } goto out; } @@ -505,4 +507,5 @@ void rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) if (call->acks_hard_ack != call->tx_bottom) rxrpc_shrink_call_tx_buffer(call); _leave(""); + return true; } diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c index 924c8dc58ac9..cd491fd83c0d 100644 --- a/net/rxrpc/call_object.c +++ b/net/rxrpc/call_object.c @@ -578,7 +578,8 @@ void rxrpc_release_calls_on_socket(struct rxrpc_sock *rx) call = list_entry(rx->to_be_accepted.next, struct rxrpc_call, accept_link); list_del(&call->accept_link); - rxrpc_propose_abort(call, RX_CALL_DEAD, -ECONNRESET, "SKR"); + rxrpc_propose_abort(call, RX_CALL_DEAD, -ECONNRESET, + rxrpc_abort_call_sock_release_tba); rxrpc_put_call(call, rxrpc_call_put_release_sock_tba); } @@ -586,7 +587,8 @@ void rxrpc_release_calls_on_socket(struct rxrpc_sock *rx) call = list_entry(rx->sock_calls.next, struct rxrpc_call, sock_link); rxrpc_get_call(call, rxrpc_call_get_release_sock); - rxrpc_propose_abort(call, RX_CALL_DEAD, -ECONNRESET, "SKT"); + rxrpc_propose_abort(call, RX_CALL_DEAD, -ECONNRESET, + rxrpc_abort_call_sock_release); rxrpc_release_call(rx, call); rxrpc_put_call(call, rxrpc_call_put_release_sock); } diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c index 93a892471999..21c826161741 100644 --- a/net/rxrpc/conn_event.c +++ b/net/rxrpc/conn_event.c @@ -47,7 +47,7 @@ static bool rxrpc_set_conn_aborted(struct rxrpc_connection *conn, struct sk_buff * Mark a socket buffer to indicate that the connection it's on should be aborted. 
*/ int rxrpc_abort_conn(struct rxrpc_connection *conn, struct sk_buff *skb, - s32 abort_code, int err, const char *why) + s32 abort_code, int err, enum rxrpc_abort_reason why) { struct rxrpc_skb_priv *sp = rxrpc_skb(skb); @@ -288,8 +288,7 @@ static int rxrpc_process_event(struct rxrpc_connection *conn, return 0; default: - trace_rxrpc_rx_eproto(NULL, sp->hdr.serial, - tracepoint_string("bad_conn_pkt")); + WARN_ON_ONCE(1); return -EPROTO; } } @@ -300,7 +299,8 @@ static int rxrpc_process_event(struct rxrpc_connection *conn, static void rxrpc_secure_connection(struct rxrpc_connection *conn) { if (conn->security->issue_challenge(conn) < 0) - rxrpc_abort_conn(conn, NULL, RX_CALL_DEAD, -ENOMEM, "OOM"); + rxrpc_abort_conn(conn, NULL, RX_CALL_DEAD, -ENOMEM, + rxrpc_abort_nomem); } /* @@ -405,14 +405,14 @@ static void rxrpc_post_packet_to_conn(struct rxrpc_connection *conn, /* * Input a connection-level packet. */ -int rxrpc_input_conn_packet(struct rxrpc_connection *conn, struct sk_buff *skb) +bool rxrpc_input_conn_packet(struct rxrpc_connection *conn, struct sk_buff *skb) { struct rxrpc_skb_priv *sp = rxrpc_skb(skb); switch (sp->hdr.type) { case RXRPC_PACKET_TYPE_BUSY: /* Just ignore BUSY packets for now. */ - return 0; + return true; case RXRPC_PACKET_TYPE_ABORT: if (smp_load_acquire(&conn->state) == RXRPC_CONN_ABORTED) @@ -429,12 +429,11 @@ int rxrpc_input_conn_packet(struct rxrpc_connection *conn, struct sk_buff *skb) return true; } rxrpc_post_packet_to_conn(conn, skb); - return 0; + return true; default: - trace_rxrpc_rx_eproto(NULL, sp->hdr.serial, - tracepoint_string("bad_conn_pkt")); - return -EPROTO; + WARN_ON_ONCE(1); + return true; } } diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index 423b1839c06d..dce440600f1a 100644 --- a/net/rxrpc/input.c +++ b/net/rxrpc/input.c @@ -9,10 +9,10 @@ #include "ar-internal.h" -static void rxrpc_proto_abort(const char *why, - struct rxrpc_call *call, rxrpc_seq_t seq) +static void rxrpc_proto_abort(struct rxrpc_call *call, rxrpc_seq_t seq, + enum rxrpc_abort_reason why) { - rxrpc_abort_call(why, call, seq, RX_PROTOCOL_ERROR, -EBADMSG); + rxrpc_abort_call(call, seq, RX_PROTOCOL_ERROR, -EBADMSG, why); } /* @@ -249,8 +249,8 @@ static bool rxrpc_rotate_tx_window(struct rxrpc_call *call, rxrpc_seq_t to, * This occurs when we get an ACKALL packet, the first DATA packet of a reply, * or a final ACK packet. 
*/ -static bool rxrpc_end_tx_phase(struct rxrpc_call *call, bool reply_begun, - const char *abort_why) +static void rxrpc_end_tx_phase(struct rxrpc_call *call, bool reply_begun, + enum rxrpc_abort_reason abort_why) { unsigned int state; @@ -283,13 +283,12 @@ static bool rxrpc_end_tx_phase(struct rxrpc_call *call, bool reply_begun, else trace_rxrpc_txqueue(call, rxrpc_txqueue_end); _leave(" = ok"); - return true; + return; bad_state: spin_unlock(&call->state_lock); kdebug("end_tx %s", rxrpc_call_states[call->state]); - rxrpc_proto_abort(abort_why, call, call->tx_top); - return false; + rxrpc_proto_abort(call, call->tx_top, abort_why); } /* @@ -311,11 +310,13 @@ static bool rxrpc_receiving_reply(struct rxrpc_call *call) if (!test_bit(RXRPC_CALL_TX_LAST, &call->flags)) { if (!rxrpc_rotate_tx_window(call, top, &summary)) { - rxrpc_proto_abort("TXL", call, top); + rxrpc_proto_abort(call, top, rxrpc_eproto_early_reply); return false; } } - return rxrpc_end_tx_phase(call, true, "ETD"); + + rxrpc_end_tx_phase(call, true, rxrpc_eproto_unexpected_reply); + return true; } static void rxrpc_input_update_ack_window(struct rxrpc_call *call, @@ -366,17 +367,14 @@ static void rxrpc_input_data_one(struct rxrpc_call *call, struct sk_buff *skb, if (last) { if (test_and_set_bit(RXRPC_CALL_RX_LAST, &call->flags) && - seq + 1 != wtop) { - rxrpc_proto_abort("LSN", call, seq); - return; - } + seq + 1 != wtop) + return rxrpc_proto_abort(call, seq, rxrpc_eproto_different_last); } else { if (test_bit(RXRPC_CALL_RX_LAST, &call->flags) && after_eq(seq, wtop)) { pr_warn("Packet beyond last: c=%x q=%x window=%x-%x wlimit=%x\n", call->debug_id, seq, window, wtop, wlimit); - rxrpc_proto_abort("LSA", call, seq); - return; + return rxrpc_proto_abort(call, seq, rxrpc_eproto_data_after_last); } } @@ -583,7 +581,7 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb) goto out_notify; if (!rxrpc_input_split_jumbo(call, skb)) { - rxrpc_proto_abort("VLD", call, sp->hdr.seq); + rxrpc_proto_abort(call, sp->hdr.seq, rxrpc_badmsg_bad_jumbo); goto out_notify; } skb = NULL; @@ -764,7 +762,7 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) offset = sizeof(struct rxrpc_wire_header); if (skb_copy_bits(skb, offset, &ack, sizeof(ack)) < 0) - return rxrpc_proto_abort("XAK", call, 0); + return rxrpc_proto_abort(call, 0, rxrpc_badmsg_short_ack); offset += sizeof(ack); ack_serial = sp->hdr.serial; @@ -844,7 +842,7 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) ioffset = offset + nr_acks + 3; if (skb->len >= ioffset + sizeof(info) && skb_copy_bits(skb, ioffset, &info, sizeof(info)) < 0) - return rxrpc_proto_abort("XAI", call, 0); + return rxrpc_proto_abort(call, 0, rxrpc_badmsg_short_ack_info); if (nr_acks > 0) skb_condense(skb); @@ -867,7 +865,7 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) rxrpc_input_ackinfo(call, skb, &info); if (first_soft_ack == 0) - return rxrpc_proto_abort("AK0", call, 0); + return rxrpc_proto_abort(call, 0, rxrpc_eproto_ackr_zero); /* Ignore ACKs unless we are or have just been transmitting. 
*/ switch (READ_ONCE(call->state)) { @@ -882,20 +880,20 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) if (before(hard_ack, call->acks_hard_ack) || after(hard_ack, call->tx_top)) - return rxrpc_proto_abort("AKW", call, 0); + return rxrpc_proto_abort(call, 0, rxrpc_eproto_ackr_outside_window); if (nr_acks > call->tx_top - hard_ack) - return rxrpc_proto_abort("AKN", call, 0); + return rxrpc_proto_abort(call, 0, rxrpc_eproto_ackr_sack_overflow); if (after(hard_ack, call->acks_hard_ack)) { if (rxrpc_rotate_tx_window(call, hard_ack, &summary)) { - rxrpc_end_tx_phase(call, false, "ETA"); + rxrpc_end_tx_phase(call, false, rxrpc_eproto_unexpected_ack); return; } } if (nr_acks > 0) { if (offset > (int)skb->len - nr_acks) - return rxrpc_proto_abort("XSA", call, 0); + return rxrpc_proto_abort(call, 0, rxrpc_eproto_ackr_short_sack); rxrpc_input_soft_acks(call, skb->data + offset, first_soft_ack, nr_acks, &summary); } @@ -917,7 +915,7 @@ static void rxrpc_input_ackall(struct rxrpc_call *call, struct sk_buff *skb) struct rxrpc_ack_summary summary = { 0 }; if (rxrpc_rotate_tx_window(call, call->tx_top, &summary)) - rxrpc_end_tx_phase(call, false, "ETL"); + rxrpc_end_tx_phase(call, false, rxrpc_eproto_unexpected_ackall); } /* @@ -962,27 +960,23 @@ void rxrpc_input_call_packet(struct rxrpc_call *call, struct sk_buff *skb) switch (sp->hdr.type) { case RXRPC_PACKET_TYPE_DATA: - rxrpc_input_data(call, skb); - break; + return rxrpc_input_data(call, skb); case RXRPC_PACKET_TYPE_ACK: - rxrpc_input_ack(call, skb); - break; + return rxrpc_input_ack(call, skb); case RXRPC_PACKET_TYPE_BUSY: /* Just ignore BUSY packets from the server; the retry and * lifespan timers will take care of business. BUSY packets * from the client don't make sense. */ - break; + return; case RXRPC_PACKET_TYPE_ABORT: - rxrpc_input_abort(call, skb); - break; + return rxrpc_input_abort(call, skb); case RXRPC_PACKET_TYPE_ACKALL: - rxrpc_input_ackall(call, skb); - break; + return rxrpc_input_ackall(call, skb); default: break; @@ -1004,7 +998,8 @@ void rxrpc_implicit_end_call(struct rxrpc_call *call, struct sk_buff *skb) case RXRPC_CALL_COMPLETE: break; default: - rxrpc_abort_call("IMP", call, 0, RX_CALL_DEAD, -ESHUTDOWN); + rxrpc_abort_call(call, 0, RX_CALL_DEAD, -ESHUTDOWN, + rxrpc_eproto_improper_term); trace_rxrpc_improper_term(call); break; } diff --git a/net/rxrpc/insecure.c b/net/rxrpc/insecure.c index 29dcc7d3f51a..34353b6e584b 100644 --- a/net/rxrpc/insecure.c +++ b/net/rxrpc/insecure.c @@ -45,13 +45,15 @@ static void none_free_call_crypto(struct rxrpc_call *call) static int none_respond_to_challenge(struct rxrpc_connection *conn, struct sk_buff *skb) { - return rxrpc_abort_conn(conn, skb, RX_PROTOCOL_ERROR, -EPROTO, "RXN"); + return rxrpc_abort_conn(conn, skb, RX_PROTOCOL_ERROR, -EPROTO, + rxrpc_eproto_rxnull_challenge); } static int none_verify_response(struct rxrpc_connection *conn, struct sk_buff *skb) { - return rxrpc_abort_conn(conn, skb, RX_PROTOCOL_ERROR, -EPROTO, "RXN"); + return rxrpc_abort_conn(conn, skb, RX_PROTOCOL_ERROR, -EPROTO, + rxrpc_eproto_rxnull_response); } static void none_clear(struct rxrpc_connection *conn) diff --git a/net/rxrpc/io_thread.c b/net/rxrpc/io_thread.c index be23b2dc930f..f709f6f109db 100644 --- a/net/rxrpc/io_thread.c +++ b/net/rxrpc/io_thread.c @@ -73,10 +73,32 @@ void rxrpc_error_report(struct sock *sk) rcu_read_unlock(); } +/* + * Directly produce an abort from a packet. 
+ */ +bool rxrpc_direct_abort(struct sk_buff *skb, enum rxrpc_abort_reason why, + s32 abort_code, int err) +{ + struct rxrpc_skb_priv *sp = rxrpc_skb(skb); + + trace_rxrpc_abort(0, why, sp->hdr.cid, sp->hdr.callNumber, sp->hdr.seq, + abort_code, err); + skb->mark = RXRPC_SKB_MARK_REJECT_ABORT; + skb->priority = abort_code; + return false; +} + +static bool rxrpc_bad_message(struct sk_buff *skb, enum rxrpc_abort_reason why) +{ + return rxrpc_direct_abort(skb, why, RX_PROTOCOL_ERROR, -EBADMSG); +} + +#define just_discard true + /* * Process event packets targeted at a local endpoint. */ -static void rxrpc_input_version(struct rxrpc_local *local, struct sk_buff *skb) +static bool rxrpc_input_version(struct rxrpc_local *local, struct sk_buff *skb) { struct rxrpc_skb_priv *sp = rxrpc_skb(skb); char v; @@ -88,22 +110,21 @@ static void rxrpc_input_version(struct rxrpc_local *local, struct sk_buff *skb) if (v == 0) rxrpc_send_version_request(local, &sp->hdr, skb); } + + return true; } /* * Extract the wire header from a packet and translate the byte order. */ -static noinline -int rxrpc_extract_header(struct rxrpc_skb_priv *sp, struct sk_buff *skb) +static bool rxrpc_extract_header(struct rxrpc_skb_priv *sp, + struct sk_buff *skb) { struct rxrpc_wire_header whdr; /* dig out the RxRPC connection details */ - if (skb_copy_bits(skb, 0, &whdr, sizeof(whdr)) < 0) { - trace_rxrpc_rx_eproto(NULL, sp->hdr.serial, - tracepoint_string("bad_hdr")); - return -EBADMSG; - } + if (skb_copy_bits(skb, 0, &whdr, sizeof(whdr)) < 0) + return rxrpc_bad_message(skb, rxrpc_badmsg_short_hdr); memset(sp, 0, sizeof(*sp)); sp->hdr.epoch = ntohl(whdr.epoch); @@ -117,7 +138,7 @@ int rxrpc_extract_header(struct rxrpc_skb_priv *sp, struct sk_buff *skb) sp->hdr.securityIndex = whdr.securityIndex; sp->hdr._rsvd = ntohs(whdr._rsvd); sp->hdr.serviceId = ntohs(whdr.serviceId); - return 0; + return true; } /* @@ -137,28 +158,28 @@ static bool rxrpc_extract_abort(struct sk_buff *skb) /* * Process packets received on the local endpoint */ -static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff **_skb) +static bool rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff **_skb) { struct rxrpc_connection *conn; struct sockaddr_rxrpc peer_srx; struct rxrpc_skb_priv *sp; struct rxrpc_peer *peer = NULL; struct sk_buff *skb = *_skb; - int ret = 0; + bool ret = false; skb_pull(skb, sizeof(struct udphdr)); sp = rxrpc_skb(skb); /* dig out the RxRPC connection details */ - if (rxrpc_extract_header(sp, skb) < 0) - goto bad_message; + if (!rxrpc_extract_header(sp, skb)) + return just_discard; if (IS_ENABLED(CONFIG_AF_RXRPC_INJECT_LOSS)) { static int lose; if ((lose++ & 7) == 7) { trace_rxrpc_rx_lose(sp); - return 0; + return just_discard; } } @@ -167,28 +188,28 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff **_skb) switch (sp->hdr.type) { case RXRPC_PACKET_TYPE_VERSION: if (rxrpc_to_client(sp)) - return 0; - rxrpc_input_version(local, skb); - return 0; + return just_discard; + return rxrpc_input_version(local, skb); case RXRPC_PACKET_TYPE_BUSY: if (rxrpc_to_server(sp)) - return 0; + return just_discard; fallthrough; case RXRPC_PACKET_TYPE_ACK: case RXRPC_PACKET_TYPE_ACKALL: if (sp->hdr.callNumber == 0) - goto bad_message; + return rxrpc_bad_message(skb, rxrpc_badmsg_zero_call); break; case RXRPC_PACKET_TYPE_ABORT: if (!rxrpc_extract_abort(skb)) - return 0; /* Just discard if malformed */ + return just_discard; /* Just discard if malformed */ break; case RXRPC_PACKET_TYPE_DATA: - if 
(sp->hdr.callNumber == 0 || - sp->hdr.seq == 0) - goto bad_message; + if (sp->hdr.callNumber == 0) + return rxrpc_bad_message(skb, rxrpc_badmsg_zero_call); + if (sp->hdr.seq == 0) + return rxrpc_bad_message(skb, rxrpc_badmsg_zero_seq); /* Unshare the packet so that it can be modified for in-place * decryption. @@ -198,7 +219,7 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff **_skb) if (!skb) { rxrpc_eaten_skb(*_skb, rxrpc_skb_eaten_by_unshare_nomem); *_skb = NULL; - return 0; + return just_discard; } if (skb != *_skb) { @@ -212,28 +233,28 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff **_skb) case RXRPC_PACKET_TYPE_CHALLENGE: if (rxrpc_to_server(sp)) - return 0; + return just_discard; break; case RXRPC_PACKET_TYPE_RESPONSE: if (rxrpc_to_client(sp)) - return 0; + return just_discard; break; /* Packet types 9-11 should just be ignored. */ case RXRPC_PACKET_TYPE_PARAMS: case RXRPC_PACKET_TYPE_10: case RXRPC_PACKET_TYPE_11: - return 0; + return just_discard; default: - goto bad_message; + return rxrpc_bad_message(skb, rxrpc_badmsg_unsupported_packet); } if (sp->hdr.serviceId == 0) - goto bad_message; + return rxrpc_bad_message(skb, rxrpc_badmsg_zero_service); if (WARN_ON_ONCE(rxrpc_extract_addr_from_skb(&peer_srx, skb) < 0)) - return true; /* Unsupported address type - discard. */ + return just_discard; /* Unsupported address type. */ if (peer_srx.transport.family != local->srx.transport.family && (peer_srx.transport.family == AF_INET && @@ -241,7 +262,7 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff **_skb) pr_warn_ratelimited("AF_RXRPC: Protocol mismatch %u not %u\n", peer_srx.transport.family, local->srx.transport.family); - return true; /* Wrong address type - discard. */ + return just_discard; /* Wrong address type. 
*/ } if (rxrpc_to_client(sp)) { @@ -249,12 +270,8 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff **_skb) conn = rxrpc_find_client_connection_rcu(local, &peer_srx, skb); conn = rxrpc_get_connection_maybe(conn, rxrpc_conn_get_call_input); rcu_read_unlock(); - if (!conn) { - trace_rxrpc_abort(0, "NCC", sp->hdr.cid, - sp->hdr.callNumber, sp->hdr.seq, - RXKADINCONSISTENCY, EBADMSG); - goto protocol_error; - } + if (!conn) + return rxrpc_protocol_error(skb, rxrpc_eproto_no_client_conn); ret = rxrpc_input_packet_on_conn(conn, &peer_srx, skb); rxrpc_put_connection(conn, rxrpc_conn_put_call_input); @@ -287,18 +304,6 @@ static int rxrpc_input_packet(struct rxrpc_local *local, struct sk_buff **_skb) ret = rxrpc_new_incoming_call(local, peer, NULL, &peer_srx, skb); rxrpc_put_peer(peer, rxrpc_peer_put_input); - if (ret < 0) - goto reject_packet; - return 0; - -bad_message: - trace_rxrpc_abort(0, "BAD", sp->hdr.cid, sp->hdr.callNumber, sp->hdr.seq, - RX_PROTOCOL_ERROR, EBADMSG); -protocol_error: - skb->priority = RX_PROTOCOL_ERROR; - skb->mark = RXRPC_SKB_MARK_REJECT_ABORT; -reject_packet: - rxrpc_reject_packet(local, skb); return ret; } @@ -313,21 +318,23 @@ static int rxrpc_input_packet_on_conn(struct rxrpc_connection *conn, struct rxrpc_channel *chan; struct rxrpc_call *call = NULL; unsigned int channel; + bool ret; if (sp->hdr.securityIndex != conn->security_ix) - goto wrong_security; + return rxrpc_direct_abort(skb, rxrpc_eproto_wrong_security, + RXKADINCONSISTENCY, -EBADMSG); if (sp->hdr.serviceId != conn->service_id) { int old_id; if (!test_bit(RXRPC_CONN_PROBING_FOR_UPGRADE, &conn->flags)) - goto reupgrade; + return rxrpc_protocol_error(skb, rxrpc_eproto_reupgrade); + old_id = cmpxchg(&conn->service_id, conn->orig_service_id, sp->hdr.serviceId); - if (old_id != conn->orig_service_id && old_id != sp->hdr.serviceId) - goto reupgrade; + return rxrpc_protocol_error(skb, rxrpc_eproto_bad_upgrade); } if (after(sp->hdr.serial, conn->hi_serial)) @@ -343,19 +350,19 @@ static int rxrpc_input_packet_on_conn(struct rxrpc_connection *conn, /* Ignore really old calls */ if (sp->hdr.callNumber < chan->last_call) - return 0; + return just_discard; if (sp->hdr.callNumber == chan->last_call) { if (chan->call || sp->hdr.type == RXRPC_PACKET_TYPE_ABORT) - return 0; + return just_discard; /* For the previous service call, if completed successfully, we * discard all further packets. */ if (rxrpc_conn_is_service(conn) && chan->last_type == RXRPC_PACKET_TYPE_ACK) - return 0; + return just_discard; /* But otherwise we need to retransmit the final packet from * data cached in the connection record. 
@@ -366,7 +373,7 @@ static int rxrpc_input_packet_on_conn(struct rxrpc_connection *conn, sp->hdr.serial, sp->hdr.flags); rxrpc_conn_retransmit_call(conn, skb, channel); - return 0; + return just_discard; } rcu_read_lock(); @@ -375,10 +382,9 @@ static int rxrpc_input_packet_on_conn(struct rxrpc_connection *conn, rcu_read_unlock(); if (sp->hdr.callNumber > chan->call_id) { - if (rxrpc_to_client(sp)) { - rxrpc_put_call(call, rxrpc_call_put_input); - goto reject_packet; - } + if (rxrpc_to_client(sp)) + return rxrpc_protocol_error( + skb, rxrpc_eproto_unexpected_implicit_end); if (call) { rxrpc_implicit_end_call(call, skb); @@ -389,38 +395,14 @@ static int rxrpc_input_packet_on_conn(struct rxrpc_connection *conn, if (!call) { if (rxrpc_to_client(sp)) - goto bad_message; - if (rxrpc_new_incoming_call(conn->local, conn->peer, conn, - peer_srx, skb)) - return 0; - goto reject_packet; + return rxrpc_protocol_error(skb, rxrpc_eproto_no_client_call); + return rxrpc_new_incoming_call(conn->local, conn->peer, conn, + peer_srx, skb); } - rxrpc_input_call_event(call, skb); + ret = rxrpc_input_call_event(call, skb); rxrpc_put_call(call, rxrpc_call_put_input); - return 0; - -wrong_security: - trace_rxrpc_abort(0, "SEC", sp->hdr.cid, sp->hdr.callNumber, sp->hdr.seq, - RXKADINCONSISTENCY, EBADMSG); - skb->priority = RXKADINCONSISTENCY; - goto post_abort; - -reupgrade: - trace_rxrpc_abort(0, "UPG", sp->hdr.cid, sp->hdr.callNumber, sp->hdr.seq, - RX_PROTOCOL_ERROR, EBADMSG); - goto protocol_error; - -bad_message: - trace_rxrpc_abort(0, "BAD", sp->hdr.cid, sp->hdr.callNumber, sp->hdr.seq, - RX_PROTOCOL_ERROR, EBADMSG); -protocol_error: - skb->priority = RX_PROTOCOL_ERROR; -post_abort: - skb->mark = RXRPC_SKB_MARK_REJECT_ABORT; -reject_packet: - rxrpc_reject_packet(conn->local, skb); - return 0; + return ret; } /* @@ -476,7 +458,8 @@ int rxrpc_io_thread(void *data) switch (skb->mark) { case RXRPC_SKB_MARK_PACKET: skb->priority = 0; - rxrpc_input_packet(local, &skb); + if (!rxrpc_input_packet(local, &skb)) + rxrpc_reject_packet(local, skb); trace_rxrpc_rx_done(skb->mark, skb->priority); rxrpc_free_skb(skb, rxrpc_skb_put_input); break; diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c index 56a31aebea38..b7178f2150c0 100644 --- a/net/rxrpc/recvmsg.c +++ b/net/rxrpc/recvmsg.c @@ -118,8 +118,9 @@ bool rxrpc_call_completed(struct rxrpc_call *call) /* * Record that a call is locally aborted. 
*/ -bool __rxrpc_abort_call(const char *why, struct rxrpc_call *call, - rxrpc_seq_t seq, u32 abort_code, int error) +bool __rxrpc_abort_call(struct rxrpc_call *call, rxrpc_seq_t seq, + u32 abort_code, int error, + enum rxrpc_abort_reason why) { trace_rxrpc_abort(call->debug_id, why, call->cid, call->call_id, seq, abort_code, error); @@ -127,13 +128,14 @@ bool __rxrpc_abort_call(const char *why, struct rxrpc_call *call, abort_code, error); } -bool rxrpc_abort_call(const char *why, struct rxrpc_call *call, - rxrpc_seq_t seq, u32 abort_code, int error) +bool rxrpc_abort_call(struct rxrpc_call *call, rxrpc_seq_t seq, + u32 abort_code, int error, + enum rxrpc_abort_reason why) { bool ret; spin_lock(&call->state_lock); - ret = __rxrpc_abort_call(why, call, seq, abort_code, error); + ret = __rxrpc_abort_call(call, seq, abort_code, error, why); spin_unlock(&call->state_lock); if (ret) rxrpc_send_abort_packet(call); @@ -641,11 +643,15 @@ int rxrpc_kernel_recv_data(struct socket *sock, struct rxrpc_call *call, return ret; short_data: - trace_rxrpc_rx_eproto(call, 0, tracepoint_string("short_data")); + trace_rxrpc_abort(call->debug_id, rxrpc_recvmsg_short_data, + call->cid, call->call_id, call->rx_consumed, + 0, -EBADMSG); ret = -EBADMSG; goto out; excess_data: - trace_rxrpc_rx_eproto(call, 0, tracepoint_string("excess_data")); + trace_rxrpc_abort(call->debug_id, rxrpc_recvmsg_excess_data, + call->cid, call->call_id, call->rx_consumed, + 0, -EMSGSIZE); ret = -EMSGSIZE; goto out; call_complete: diff --git a/net/rxrpc/rxkad.c b/net/rxrpc/rxkad.c index 5d2fbc6ec3cf..3dea0dc319a2 100644 --- a/net/rxrpc/rxkad.c +++ b/net/rxrpc/rxkad.c @@ -411,18 +411,15 @@ static int rxkad_verify_packet_1(struct rxrpc_call *call, struct sk_buff *skb, struct rxrpc_skb_priv *sp = rxrpc_skb(skb); struct rxrpc_crypt iv; struct scatterlist sg[16]; - bool aborted; u32 data_size, buf; u16 check; int ret; _enter(""); - if (sp->len < 8) { - aborted = rxrpc_abort_eproto(call, skb, "rxkad_1_hdr", "V1H", - RXKADSEALEDINCON); - goto protocol_error; - } + if (sp->len < 8) + return rxrpc_abort_eproto(call, skb, RXKADSEALEDINCON, + rxkad_abort_1_short_header); /* Decrypt the skbuff in-place. TODO: We really want to decrypt * directly into the target buffer. 
@@ -442,11 +439,9 @@ static int rxkad_verify_packet_1(struct rxrpc_call *call, struct sk_buff *skb, skcipher_request_zero(req); /* Extract the decrypted packet length */ - if (skb_copy_bits(skb, sp->offset, &sechdr, sizeof(sechdr)) < 0) { - aborted = rxrpc_abort_eproto(call, skb, "rxkad_1_len", "XV1", - RXKADDATALEN); - goto protocol_error; - } + if (skb_copy_bits(skb, sp->offset, &sechdr, sizeof(sechdr)) < 0) + return rxrpc_abort_eproto(call, skb, RXKADDATALEN, + rxkad_abort_1_short_encdata); sp->offset += sizeof(sechdr); sp->len -= sizeof(sechdr); @@ -456,26 +451,16 @@ static int rxkad_verify_packet_1(struct rxrpc_call *call, struct sk_buff *skb, check = buf >> 16; check ^= seq ^ call->call_id; check &= 0xffff; - if (check != 0) { - aborted = rxrpc_abort_eproto(call, skb, "rxkad_1_check", "V1C", - RXKADSEALEDINCON); - goto protocol_error; - } - - if (data_size > sp->len) { - aborted = rxrpc_abort_eproto(call, skb, "rxkad_1_datalen", "V1L", - RXKADDATALEN); - goto protocol_error; - } + if (check != 0) + return rxrpc_abort_eproto(call, skb, RXKADSEALEDINCON, + rxkad_abort_1_short_check); + if (data_size > sp->len) + return rxrpc_abort_eproto(call, skb, RXKADDATALEN, + rxkad_abort_1_short_data); sp->len = data_size; _leave(" = 0 [dlen=%x]", data_size); return 0; - -protocol_error: - if (aborted) - rxrpc_send_abort_packet(call); - return -EPROTO; } /* @@ -490,18 +475,15 @@ static int rxkad_verify_packet_2(struct rxrpc_call *call, struct sk_buff *skb, struct rxrpc_skb_priv *sp = rxrpc_skb(skb); struct rxrpc_crypt iv; struct scatterlist _sg[4], *sg; - bool aborted; u32 data_size, buf; u16 check; int nsg, ret; _enter(",{%d}", sp->len); - if (sp->len < 8) { - aborted = rxrpc_abort_eproto(call, skb, "rxkad_2_hdr", "V2H", - RXKADSEALEDINCON); - goto protocol_error; - } + if (sp->len < 8) + return rxrpc_abort_eproto(call, skb, RXKADSEALEDINCON, + rxkad_abort_2_short_header); /* Decrypt the skbuff in-place. TODO: We really want to decrypt * directly into the target buffer. 
@@ -513,7 +495,7 @@ static int rxkad_verify_packet_2(struct rxrpc_call *call, struct sk_buff *skb, } else { sg = kmalloc_array(nsg, sizeof(*sg), GFP_NOIO); if (!sg) - goto nomem; + return -ENOMEM; } sg_init_table(sg, nsg); @@ -537,11 +519,9 @@ static int rxkad_verify_packet_2(struct rxrpc_call *call, struct sk_buff *skb, kfree(sg); /* Extract the decrypted packet length */ - if (skb_copy_bits(skb, sp->offset, &sechdr, sizeof(sechdr)) < 0) { - aborted = rxrpc_abort_eproto(call, skb, "rxkad_2_len", "XV2", - RXKADDATALEN); - goto protocol_error; - } + if (skb_copy_bits(skb, sp->offset, &sechdr, sizeof(sechdr)) < 0) + return rxrpc_abort_eproto(call, skb, RXKADDATALEN, + rxkad_abort_2_short_len); sp->offset += sizeof(sechdr); sp->len -= sizeof(sechdr); @@ -551,30 +531,17 @@ static int rxkad_verify_packet_2(struct rxrpc_call *call, struct sk_buff *skb, check = buf >> 16; check ^= seq ^ call->call_id; check &= 0xffff; - if (check != 0) { - aborted = rxrpc_abort_eproto(call, skb, "rxkad_2_check", "V2C", - RXKADSEALEDINCON); - goto protocol_error; - } + if (check != 0) + return rxrpc_abort_eproto(call, skb, RXKADSEALEDINCON, + rxkad_abort_2_short_check); - if (data_size > sp->len) { - aborted = rxrpc_abort_eproto(call, skb, "rxkad_2_datalen", "V2L", - RXKADDATALEN); - goto protocol_error; - } + if (data_size > sp->len) + return rxrpc_abort_eproto(call, skb, RXKADDATALEN, + rxkad_abort_2_short_data); sp->len = data_size; _leave(" = 0 [dlen=%x]", data_size); return 0; - -protocol_error: - if (aborted) - rxrpc_send_abort_packet(call); - return -EPROTO; - -nomem: - _leave(" = -ENOMEM"); - return -ENOMEM; } /* @@ -590,7 +557,6 @@ static int rxkad_verify_packet(struct rxrpc_call *call, struct sk_buff *skb) __be32 buf[2]; } crypto __aligned(8); rxrpc_seq_t seq = sp->hdr.seq; - bool aborted; int ret; u16 cksum; u32 x, y; @@ -627,9 +593,9 @@ static int rxkad_verify_packet(struct rxrpc_call *call, struct sk_buff *skb) cksum = 1; /* zero checksums are not permitted */ if (cksum != sp->hdr.cksum) { - aborted = rxrpc_abort_eproto(call, skb, "rxkad_csum", "VCK", - RXKADSEALEDINCON); - goto protocol_error; + ret = rxrpc_abort_eproto(call, skb, RXKADSEALEDINCON, + rxkad_abort_bad_checksum); + goto out; } switch (call->conn->security_level) { @@ -647,13 +613,9 @@ static int rxkad_verify_packet(struct rxrpc_call *call, struct sk_buff *skb) break; } +out: skcipher_request_free(req); return ret; - -protocol_error: - if (aborted) - rxrpc_send_abort_packet(call); - return -EPROTO; } /* @@ -827,27 +789,24 @@ static int rxkad_respond_to_challenge(struct rxrpc_connection *conn, struct rxkad_challenge challenge; struct rxkad_response *resp; struct rxrpc_skb_priv *sp = rxrpc_skb(skb); - const char *eproto; - u32 version, nonce, min_level, abort_code; - int ret; + u32 version, nonce, min_level; + int ret = -EPROTO; _enter("{%d,%x}", conn->debug_id, key_serial(conn->key)); - eproto = tracepoint_string("chall_no_key"); - abort_code = RX_PROTOCOL_ERROR; if (!conn->key) - goto protocol_error; + return rxrpc_abort_conn(conn, skb, RX_PROTOCOL_ERROR, -EPROTO, + rxkad_abort_chall_no_key); - abort_code = RXKADEXPIRED; ret = key_validate(conn->key); if (ret < 0) - goto other_error; + return rxrpc_abort_conn(conn, skb, RXKADEXPIRED, ret, + rxkad_abort_chall_key_expired); - eproto = tracepoint_string("chall_short"); - abort_code = RXKADPACKETSHORT; if (skb_copy_bits(skb, sizeof(struct rxrpc_wire_header), &challenge, sizeof(challenge)) < 0) - goto protocol_error; + return rxrpc_abort_conn(conn, skb, RXKADPACKETSHORT, -EPROTO, + 
rxkad_abort_chall_short); version = ntohl(challenge.version); nonce = ntohl(challenge.nonce); @@ -855,15 +814,13 @@ static int rxkad_respond_to_challenge(struct rxrpc_connection *conn, trace_rxrpc_rx_challenge(conn, sp->hdr.serial, version, nonce, min_level); - eproto = tracepoint_string("chall_ver"); - abort_code = RXKADINCONSISTENCY; if (version != RXKAD_VERSION) - goto protocol_error; + return rxrpc_abort_conn(conn, skb, RXKADINCONSISTENCY, -EPROTO, + rxkad_abort_chall_version); - abort_code = RXKADLEVELFAIL; - ret = -EACCES; if (conn->security_level < min_level) - goto other_error; + return rxrpc_abort_conn(conn, skb, RXKADLEVELFAIL, -EACCES, + rxkad_abort_chall_level); token = conn->key->payload.data[0]; @@ -892,13 +849,6 @@ static int rxkad_respond_to_challenge(struct rxrpc_connection *conn, ret = rxkad_send_response(conn, &sp->hdr, resp, token->kad); kfree(resp); return ret; - -protocol_error: - trace_rxrpc_rx_eproto(NULL, sp->hdr.serial, eproto); - ret = -EPROTO; -other_error: - rxrpc_abort_conn(conn, skb, abort_code, ret, "RXK"); - return ret; } /* @@ -912,16 +862,12 @@ static int rxkad_decrypt_ticket(struct rxrpc_connection *conn, time64_t *_expiry) { struct skcipher_request *req; - struct rxrpc_skb_priv *sp = rxrpc_skb(skb); struct rxrpc_crypt iv, key; struct scatterlist sg[1]; struct in_addr addr; unsigned int life; - const char *eproto; time64_t issue, now; bool little_endian; - int ret; - u32 abort_code; u8 *p, *q, *name, *end; _enter("{%d},{%x}", conn->debug_id, key_serial(server_key)); @@ -933,10 +879,9 @@ static int rxkad_decrypt_ticket(struct rxrpc_connection *conn, memcpy(&iv, &server_key->payload.data[2], sizeof(iv)); - ret = -ENOMEM; req = skcipher_request_alloc(server_key->payload.data[0], GFP_NOFS); if (!req) - goto temporary_error; + return -ENOMEM; sg_init_one(&sg[0], ticket, ticket_len); skcipher_request_set_callback(req, 0, NULL, NULL); @@ -947,18 +892,21 @@ static int rxkad_decrypt_ticket(struct rxrpc_connection *conn, p = ticket; end = p + ticket_len; -#define Z(field) \ - ({ \ - u8 *__str = p; \ - eproto = tracepoint_string("rxkad_bad_"#field); \ - q = memchr(p, 0, end - p); \ - if (!q || q - p > (field##_SZ)) \ - goto bad_ticket; \ - for (; p < q; p++) \ - if (!isprint(*p)) \ - goto bad_ticket; \ - p++; \ - __str; \ +#define Z(field, fieldl) \ + ({ \ + u8 *__str = p; \ + q = memchr(p, 0, end - p); \ + if (!q || q - p > (field##_SZ)) \ + return rxrpc_abort_conn( \ + conn, skb, RXKADBADTICKET, -EPROTO, \ + rxkad_abort_resp_tkt_##fieldl); \ + for (; p < q; p++) \ + if (!isprint(*p)) \ + return rxrpc_abort_conn( \ + conn, skb, RXKADBADTICKET, -EPROTO, \ + rxkad_abort_resp_tkt_##fieldl); \ + p++; \ + __str; \ }) /* extract the ticket flags */ @@ -967,20 +915,20 @@ static int rxkad_decrypt_ticket(struct rxrpc_connection *conn, p++; /* extract the authentication name */ - name = Z(ANAME); + name = Z(ANAME, aname); _debug("KIV ANAME: %s", name); /* extract the principal's instance */ - name = Z(INST); + name = Z(INST, inst); _debug("KIV INST : %s", name); /* extract the principal's authentication domain */ - name = Z(REALM); + name = Z(REALM, realm); _debug("KIV REALM: %s", name); - eproto = tracepoint_string("rxkad_bad_len"); if (end - p < 4 + 8 + 4 + 2) - goto bad_ticket; + return rxrpc_abort_conn(conn, skb, RXKADBADTICKET, -EPROTO, + rxkad_abort_resp_tkt_short); /* get the IPv4 address of the entity that requested the ticket */ memcpy(&addr, p, sizeof(addr)); @@ -1012,37 +960,23 @@ static int rxkad_decrypt_ticket(struct rxrpc_connection *conn, _debug("KIV ISSUE: 
%llx [%llx]", issue, now); /* check the ticket is in date */ - if (issue > now) { - abort_code = RXKADNOAUTH; - ret = -EKEYREJECTED; - goto other_error; - } - - if (issue < now - life) { - abort_code = RXKADEXPIRED; - ret = -EKEYEXPIRED; - goto other_error; - } + if (issue > now) + return rxrpc_abort_conn(conn, skb, RXKADNOAUTH, -EKEYREJECTED, + rxkad_abort_resp_tkt_future); + if (issue < now - life) + return rxrpc_abort_conn(conn, skb, RXKADEXPIRED, -EKEYEXPIRED, + rxkad_abort_resp_tkt_expired); *_expiry = issue + life; /* get the service name */ - name = Z(SNAME); + name = Z(SNAME, sname); _debug("KIV SNAME: %s", name); /* get the service instance name */ - name = Z(INST); + name = Z(INST, sinst); _debug("KIV SINST: %s", name); return 0; - -bad_ticket: - trace_rxrpc_rx_eproto(NULL, sp->hdr.serial, eproto); - abort_code = RXKADBADTICKET; - ret = -EPROTO; -other_error: - return rxrpc_abort_conn(conn, skb, abort_code, ret, "RXK"); -temporary_error: - return ret; } /* @@ -1089,10 +1023,9 @@ static int rxkad_verify_response(struct rxrpc_connection *conn, struct rxrpc_skb_priv *sp = rxrpc_skb(skb); struct rxrpc_crypt session_key; struct key *server_key; - const char *eproto; time64_t expiry; void *ticket; - u32 abort_code, version, kvno, ticket_len, level; + u32 version, kvno, ticket_len, level; __be32 csum; int ret, i; @@ -1100,19 +1033,18 @@ static int rxkad_verify_response(struct rxrpc_connection *conn, server_key = rxrpc_look_up_server_security(conn, skb, 0, 0); if (IS_ERR(server_key)) { - switch (PTR_ERR(server_key)) { + ret = PTR_ERR(server_key); + switch (ret) { case -ENOKEY: - abort_code = RXKADUNKNOWNKEY; - break; + return rxrpc_abort_conn(conn, skb, RXKADUNKNOWNKEY, ret, + rxkad_abort_resp_nokey); case -EKEYEXPIRED: - abort_code = RXKADEXPIRED; - break; + return rxrpc_abort_conn(conn, skb, RXKADEXPIRED, ret, + rxkad_abort_resp_key_expired); default: - abort_code = RXKADNOAUTH; - break; + return rxrpc_abort_conn(conn, skb, RXKADNOAUTH, ret, + rxkad_abort_resp_key_rejected); } - return rxrpc_abort_conn(conn, skb, abort_code, - PTR_ERR(server_key), "RXK"); } ret = -ENOMEM; @@ -1120,11 +1052,12 @@ static int rxkad_verify_response(struct rxrpc_connection *conn, if (!response) goto temporary_error; - eproto = tracepoint_string("rxkad_rsp_short"); - abort_code = RXKADPACKETSHORT; if (skb_copy_bits(skb, sizeof(struct rxrpc_wire_header), - response, sizeof(*response)) < 0) + response, sizeof(*response)) < 0) { + rxrpc_abort_conn(conn, skb, RXKADPACKETSHORT, -EPROTO, + rxkad_abort_resp_short); goto protocol_error; + } version = ntohl(response->version); ticket_len = ntohl(response->ticket_len); @@ -1132,20 +1065,23 @@ static int rxkad_verify_response(struct rxrpc_connection *conn, trace_rxrpc_rx_response(conn, sp->hdr.serial, version, kvno, ticket_len); - eproto = tracepoint_string("rxkad_rsp_ver"); - abort_code = RXKADINCONSISTENCY; - if (version != RXKAD_VERSION) + if (version != RXKAD_VERSION) { + rxrpc_abort_conn(conn, skb, RXKADINCONSISTENCY, -EPROTO, + rxkad_abort_resp_version); goto protocol_error; + } - eproto = tracepoint_string("rxkad_rsp_tktlen"); - abort_code = RXKADTICKETLEN; - if (ticket_len < 4 || ticket_len > MAXKRB5TICKETLEN) + if (ticket_len < 4 || ticket_len > MAXKRB5TICKETLEN) { + rxrpc_abort_conn(conn, skb, RXKADTICKETLEN, -EPROTO, + rxkad_abort_resp_tkt_len); goto protocol_error; + } - eproto = tracepoint_string("rxkad_rsp_unkkey"); - abort_code = RXKADUNKNOWNKEY; - if (kvno >= RXKAD_TKT_TYPE_KERBEROS_V5) + if (kvno >= RXKAD_TKT_TYPE_KERBEROS_V5) { + 
rxrpc_abort_conn(conn, skb, RXKADUNKNOWNKEY, -EPROTO, + rxkad_abort_resp_unknown_tkt); goto protocol_error; + } /* extract the kerberos ticket and decrypt and decode it */ ret = -ENOMEM; @@ -1153,12 +1089,12 @@ static int rxkad_verify_response(struct rxrpc_connection *conn, if (!ticket) goto temporary_error_free_resp; - eproto = tracepoint_string("rxkad_tkt_short"); - abort_code = RXKADPACKETSHORT; - ret = skb_copy_bits(skb, sizeof(struct rxrpc_wire_header) + sizeof(*response), - ticket, ticket_len); - if (ret < 0) - goto temporary_error_free_ticket; + if (skb_copy_bits(skb, sizeof(struct rxrpc_wire_header) + sizeof(*response), + ticket, ticket_len) < 0) { + rxrpc_abort_conn(conn, skb, RXKADPACKETSHORT, -EPROTO, + rxkad_abort_resp_short_tkt); + goto protocol_error; + } ret = rxkad_decrypt_ticket(conn, server_key, skb, ticket, ticket_len, &session_key, &expiry); @@ -1169,56 +1105,66 @@ static int rxkad_verify_response(struct rxrpc_connection *conn, * response */ rxkad_decrypt_response(conn, response, &session_key); - eproto = tracepoint_string("rxkad_rsp_param"); - abort_code = RXKADSEALEDINCON; - if (ntohl(response->encrypted.epoch) != conn->proto.epoch) - goto protocol_error_free; - if (ntohl(response->encrypted.cid) != conn->proto.cid) - goto protocol_error_free; - if (ntohl(response->encrypted.securityIndex) != conn->security_ix) + if (ntohl(response->encrypted.epoch) != conn->proto.epoch || + ntohl(response->encrypted.cid) != conn->proto.cid || + ntohl(response->encrypted.securityIndex) != conn->security_ix) { + rxrpc_abort_conn(conn, skb, RXKADSEALEDINCON, -EPROTO, + rxkad_abort_resp_bad_param); goto protocol_error_free; + } + csum = response->encrypted.checksum; response->encrypted.checksum = 0; rxkad_calc_response_checksum(response); - eproto = tracepoint_string("rxkad_rsp_csum"); - if (response->encrypted.checksum != csum) + if (response->encrypted.checksum != csum) { + rxrpc_abort_conn(conn, skb, RXKADSEALEDINCON, -EPROTO, + rxkad_abort_resp_bad_checksum); goto protocol_error_free; + } spin_lock(&conn->bundle->channel_lock); for (i = 0; i < RXRPC_MAXCALLS; i++) { struct rxrpc_call *call; u32 call_id = ntohl(response->encrypted.call_id[i]); - eproto = tracepoint_string("rxkad_rsp_callid"); - if (call_id > INT_MAX) + if (call_id > INT_MAX) { + rxrpc_abort_conn(conn, skb, RXKADSEALEDINCON, -EPROTO, + rxkad_abort_resp_bad_callid); goto protocol_error_unlock; + } - eproto = tracepoint_string("rxkad_rsp_callctr"); - if (call_id < conn->channels[i].call_counter) + if (call_id < conn->channels[i].call_counter) { + rxrpc_abort_conn(conn, skb, RXKADSEALEDINCON, -EPROTO, + rxkad_abort_resp_call_ctr); goto protocol_error_unlock; + } - eproto = tracepoint_string("rxkad_rsp_callst"); if (call_id > conn->channels[i].call_counter) { call = rcu_dereference_protected( conn->channels[i].call, lockdep_is_held(&conn->bundle->channel_lock)); - if (call && call->state < RXRPC_CALL_COMPLETE) + if (call && call->state < RXRPC_CALL_COMPLETE) { + rxrpc_abort_conn(conn, skb, RXKADSEALEDINCON, -EPROTO, + rxkad_abort_resp_call_state); goto protocol_error_unlock; + } conn->channels[i].call_counter = call_id; } } spin_unlock(&conn->bundle->channel_lock); - eproto = tracepoint_string("rxkad_rsp_seq"); - abort_code = RXKADOUTOFSEQUENCE; - if (ntohl(response->encrypted.inc_nonce) != conn->rxkad.nonce + 1) + if (ntohl(response->encrypted.inc_nonce) != conn->rxkad.nonce + 1) { + rxrpc_abort_conn(conn, skb, RXKADOUTOFSEQUENCE, -EPROTO, + rxkad_abort_resp_ooseq); goto protocol_error_free; + } - eproto = 
tracepoint_string("rxkad_rsp_level"); - abort_code = RXKADLEVELFAIL; level = ntohl(response->encrypted.level); - if (level > RXRPC_SECURITY_ENCRYPT) + if (level > RXRPC_SECURITY_ENCRYPT) { + rxrpc_abort_conn(conn, skb, RXKADLEVELFAIL, -EPROTO, + rxkad_abort_resp_level); goto protocol_error_free; + } conn->security_level = level; /* create a key to hold the security data and expiration time - after @@ -1240,8 +1186,7 @@ static int rxkad_verify_response(struct rxrpc_connection *conn, protocol_error: kfree(response); key_put(server_key); - trace_rxrpc_rx_eproto(NULL, sp->hdr.serial, eproto); - return rxrpc_abort_conn(conn, skb, abort_code, -EPROTO, "RXK"); + return -EPROTO; temporary_error_free_ticket: kfree(ticket); diff --git a/net/rxrpc/rxperf.c b/net/rxrpc/rxperf.c index 66f5eea291ff..7ddf96e54418 100644 --- a/net/rxrpc/rxperf.c +++ b/net/rxrpc/rxperf.c @@ -10,6 +10,8 @@ #include #include #include +#define RXRPC_TRACE_ONLY_DEFINE_ENUMS +#include MODULE_DESCRIPTION("rxperf test server (afs)"); MODULE_AUTHOR("Red Hat, Inc."); @@ -307,12 +309,14 @@ static void rxperf_deliver_to_call(struct work_struct *work) case -EOPNOTSUPP: abort_code = RXGEN_OPCODE; rxrpc_kernel_abort_call(rxperf_socket, call->rxcall, - abort_code, ret, "GOP"); + abort_code, ret, + rxperf_abort_op_not_supported); goto call_complete; case -ENOTSUPP: abort_code = RX_USER_ABORT; rxrpc_kernel_abort_call(rxperf_socket, call->rxcall, - abort_code, ret, "GUA"); + abort_code, ret, + rxperf_abort_op_not_supported); goto call_complete; case -EIO: pr_err("Call %u in bad state %u\n", @@ -324,11 +328,13 @@ static void rxperf_deliver_to_call(struct work_struct *work) case -ENOMEM: case -EFAULT: rxrpc_kernel_abort_call(rxperf_socket, call->rxcall, - RXGEN_SS_UNMARSHAL, ret, "GUM"); + RXGEN_SS_UNMARSHAL, ret, + rxperf_abort_unmarshal_error); goto call_complete; default: rxrpc_kernel_abort_call(rxperf_socket, call->rxcall, - RX_CALL_DEAD, ret, "GER"); + RX_CALL_DEAD, ret, + rxperf_abort_general_error); goto call_complete; } } @@ -523,7 +529,8 @@ static int rxperf_process_call(struct rxperf_call *call) if (n == -ENOMEM) rxrpc_kernel_abort_call(rxperf_socket, call->rxcall, - RXGEN_SS_MARSHAL, -ENOMEM, "GOM"); + RXGEN_SS_MARSHAL, -ENOMEM, + rxperf_abort_oom); return n; } diff --git a/net/rxrpc/security.c b/net/rxrpc/security.c index ab968f65a490..78af14694618 100644 --- a/net/rxrpc/security.c +++ b/net/rxrpc/security.c @@ -144,21 +144,15 @@ const struct rxrpc_security *rxrpc_get_incoming_security(struct rxrpc_sock *rx, sec = rxrpc_security_lookup(sp->hdr.securityIndex); if (!sec) { - trace_rxrpc_abort(0, "SVS", - sp->hdr.cid, sp->hdr.callNumber, sp->hdr.seq, - RX_INVALID_OPERATION, EKEYREJECTED); - skb->mark = RXRPC_SKB_MARK_REJECT_ABORT; - skb->priority = RX_INVALID_OPERATION; + rxrpc_direct_abort(skb, rxrpc_abort_unsupported_security, + RX_INVALID_OPERATION, -EKEYREJECTED); return NULL; } if (sp->hdr.securityIndex != RXRPC_SECURITY_NONE && !rx->securities) { - trace_rxrpc_abort(0, "SVR", - sp->hdr.cid, sp->hdr.callNumber, sp->hdr.seq, - RX_INVALID_OPERATION, EKEYREJECTED); - skb->mark = RXRPC_SKB_MARK_REJECT_ABORT; - skb->priority = sec->no_key_abort; + rxrpc_direct_abort(skb, rxrpc_abort_no_service_key, + sec->no_key_abort, -EKEYREJECTED); return NULL; } diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c index c8cb3b76633c..cb40ff74c202 100644 --- a/net/rxrpc/sendmsg.c +++ b/net/rxrpc/sendmsg.c @@ -20,14 +20,15 @@ /* * Propose an abort to be made in the I/O thread. 
*/ -bool rxrpc_propose_abort(struct rxrpc_call *call, - u32 abort_code, int error, const char *why) +bool rxrpc_propose_abort(struct rxrpc_call *call, s32 abort_code, int error, + enum rxrpc_abort_reason why) { - _enter("{%d},%d,%d,%s", call->debug_id, abort_code, error, why); + _enter("{%d},%d,%d,%u", call->debug_id, abort_code, error, why); if (!call->send_abort && call->state < RXRPC_CALL_COMPLETE) { call->send_abort_why = why; call->send_abort_err = error; + call->send_abort_seq = 0; smp_store_release(&call->send_abort, abort_code); rxrpc_poke_call(call, rxrpc_call_poke_abort); return true; @@ -679,7 +680,8 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len) /* it's too late for this call */ ret = -ESHUTDOWN; } else if (p.command == RXRPC_CMD_SEND_ABORT) { - rxrpc_propose_abort(call, p.abort_code, -ECONNABORTED, "CMD"); + rxrpc_propose_abort(call, p.abort_code, -ECONNABORTED, + rxrpc_abort_call_sendmsg); ret = 0; } else if (p.command != RXRPC_CMD_SEND_DATA) { ret = -EINVAL; @@ -742,7 +744,9 @@ int rxrpc_kernel_send_data(struct socket *sock, struct rxrpc_call *call, break; default: /* Request phase complete for this client call */ - trace_rxrpc_rx_eproto(call, 0, tracepoint_string("late_send")); + trace_rxrpc_abort(call->debug_id, rxrpc_sendmsg_late_send, + call->cid, call->call_id, call->rx_consumed, + 0, -EPROTO); ret = -EPROTO; break; } @@ -760,17 +764,17 @@ EXPORT_SYMBOL(rxrpc_kernel_send_data); * @call: The call to be aborted * @abort_code: The abort code to stick into the ABORT packet * @error: Local error value - * @why: 3-char string indicating why. + * @why: Indication as to why. * * Allow a kernel service to abort a call, if it's still in an abortable state * and return true if the call was aborted, false if it was already complete. 
*/ bool rxrpc_kernel_abort_call(struct socket *sock, struct rxrpc_call *call, - u32 abort_code, int error, const char *why) + u32 abort_code, int error, enum rxrpc_abort_reason why) { bool aborted; - _enter("{%d},%d,%d,%s", call->debug_id, abort_code, error, why); + _enter("{%d},%d,%d,%u", call->debug_id, abort_code, error, why); mutex_lock(&call->user_mutex); aborted = rxrpc_propose_abort(call, abort_code, error, why);
From patchwork Tue Dec 6 16:01:09 2022 X-Patchwork-Submitter: David Howells X-Patchwork-Id: 30409 Organization: Red Hat UK Ltd. Registered in England and Wales under Company Registration No.
3798903 Subject: [PATCH net-next 18/32] rxrpc: Stash the network namespace pointer in rxrpc_local From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Tue, 06 Dec 2022 16:01:09 +0000 Message-ID: <167034246900.1105287.913631596294374964.stgit@warthog.procyon.org.uk> In-Reply-To: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> References: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.8 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1751481310819381023?= X-GMAIL-MSGID: =?utf-8?q?1751481310819381023?= Stash the network namespace pointer in the rxrpc_local struct in addition to a pointer to the rxrpc-specific net namespace info. Use this to remove some places where the socket is passed as a parameter. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- net/rxrpc/ar-internal.h | 8 ++++---- net/rxrpc/call_accept.c | 2 +- net/rxrpc/conn_client.c | 2 +- net/rxrpc/local_object.c | 7 ++++--- net/rxrpc/peer_object.c | 23 ++++++++++------------- 5 files changed, 20 insertions(+), 22 deletions(-) diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index f2ab2d8044bd..1e6e11fb3a90 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -282,7 +282,8 @@ struct rxrpc_local { struct rcu_head rcu; atomic_t active_users; /* Number of users of the local endpoint */ refcount_t ref; /* Number of references to the structure */ - struct rxrpc_net *rxnet; /* The network ns in which this resides */ + struct net *net; /* The network namespace */ + struct rxrpc_net *rxnet; /* Our bits in the network namespace */ struct hlist_node link; struct socket *socket; /* my UDP socket */ struct task_struct *io_thread; @@ -1088,12 +1089,11 @@ void rxrpc_peer_keepalive_worker(struct work_struct *); */ struct rxrpc_peer *rxrpc_lookup_peer_rcu(struct rxrpc_local *, const struct sockaddr_rxrpc *); -struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_sock *, struct rxrpc_local *, +struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_local *, struct sockaddr_rxrpc *, gfp_t); struct rxrpc_peer *rxrpc_alloc_peer(struct rxrpc_local *, gfp_t, enum rxrpc_peer_trace); -void rxrpc_new_incoming_peer(struct rxrpc_sock *, struct rxrpc_local *, - struct rxrpc_peer *); +void rxrpc_new_incoming_peer(struct rxrpc_local *, struct rxrpc_peer *); void rxrpc_destroy_all_peers(struct rxrpc_net *); struct rxrpc_peer *rxrpc_get_peer(struct rxrpc_peer *, enum rxrpc_peer_trace); struct rxrpc_peer *rxrpc_get_peer_maybe(struct rxrpc_peer *, enum rxrpc_peer_trace); diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c index 5fa81bb4abfa..a132d486dea0 100644 --- a/net/rxrpc/call_accept.c +++ b/net/rxrpc/call_accept.c @@ -280,7 +280,7 @@ static struct rxrpc_call *rxrpc_alloc_incoming_call(struct rxrpc_sock *rx, (peer_tail + 1) & (RXRPC_BACKLOG_MAX - 1)); - rxrpc_new_incoming_peer(rx, local, peer); + rxrpc_new_incoming_peer(local, peer); 
} /* Now allocate and set up the connection */ diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c index ec8913de42c9..4cfd8845df90 100644 --- a/net/rxrpc/conn_client.c +++ b/net/rxrpc/conn_client.c @@ -378,7 +378,7 @@ static struct rxrpc_bundle *rxrpc_prep_call(struct rxrpc_sock *rx, _enter("{%d,%lx},", call->debug_id, call->user_call_ID); - cp->peer = rxrpc_lookup_peer(rx, cp->local, srx, gfp); + cp->peer = rxrpc_lookup_peer(cp->local, srx, gfp); if (!cp->peer) goto error; diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c index 3ab68ae68954..3843418ea90f 100644 --- a/net/rxrpc/local_object.c +++ b/net/rxrpc/local_object.c @@ -85,7 +85,7 @@ static long rxrpc_local_cmp_key(const struct rxrpc_local *local, /* * Allocate a new local endpoint. */ -static struct rxrpc_local *rxrpc_alloc_local(struct rxrpc_net *rxnet, +static struct rxrpc_local *rxrpc_alloc_local(struct net *net, const struct sockaddr_rxrpc *srx) { struct rxrpc_local *local; @@ -94,7 +94,8 @@ static struct rxrpc_local *rxrpc_alloc_local(struct rxrpc_net *rxnet, if (local) { refcount_set(&local->ref, 1); atomic_set(&local->active_users, 1); - local->rxnet = rxnet; + local->net = net; + local->rxnet = rxrpc_net(net); INIT_HLIST_NODE(&local->link); #ifdef CONFIG_AF_RXRPC_INJECT_RX_DELAY skb_queue_head_init(&local->rx_delay_queue); @@ -249,7 +250,7 @@ struct rxrpc_local *rxrpc_lookup_local(struct net *net, goto found; } - local = rxrpc_alloc_local(rxnet, srx); + local = rxrpc_alloc_local(net, srx); if (!local) goto nomem; diff --git a/net/rxrpc/peer_object.c b/net/rxrpc/peer_object.c index 608946dcc505..c3035f44e0ca 100644 --- a/net/rxrpc/peer_object.c +++ b/net/rxrpc/peer_object.c @@ -147,10 +147,10 @@ struct rxrpc_peer *rxrpc_lookup_peer_rcu(struct rxrpc_local *local, * assess the MTU size for the network interface through which this peer is * reached */ -static void rxrpc_assess_MTU_size(struct rxrpc_sock *rx, +static void rxrpc_assess_MTU_size(struct rxrpc_local *local, struct rxrpc_peer *peer) { - struct net *net = sock_net(&rx->sk); + struct net *net = local->net; struct dst_entry *dst; struct rtable *rt; struct flowi fl; @@ -236,11 +236,11 @@ struct rxrpc_peer *rxrpc_alloc_peer(struct rxrpc_local *local, gfp_t gfp, /* * Initialise peer record. */ -static void rxrpc_init_peer(struct rxrpc_sock *rx, struct rxrpc_peer *peer, +static void rxrpc_init_peer(struct rxrpc_local *local, struct rxrpc_peer *peer, unsigned long hash_key) { peer->hash_key = hash_key; - rxrpc_assess_MTU_size(rx, peer); + rxrpc_assess_MTU_size(local, peer); peer->mtu = peer->if_mtu; peer->rtt_last_req = ktime_get_real(); @@ -272,8 +272,7 @@ static void rxrpc_init_peer(struct rxrpc_sock *rx, struct rxrpc_peer *peer, /* * Set up a new peer. */ -static struct rxrpc_peer *rxrpc_create_peer(struct rxrpc_sock *rx, - struct rxrpc_local *local, +static struct rxrpc_peer *rxrpc_create_peer(struct rxrpc_local *local, struct sockaddr_rxrpc *srx, unsigned long hash_key, gfp_t gfp) @@ -285,7 +284,7 @@ static struct rxrpc_peer *rxrpc_create_peer(struct rxrpc_sock *rx, peer = rxrpc_alloc_peer(local, gfp, rxrpc_peer_new_client); if (peer) { memcpy(&peer->srx, srx, sizeof(*srx)); - rxrpc_init_peer(rx, peer, hash_key); + rxrpc_init_peer(local, peer, hash_key); } _leave(" = %p", peer); @@ -304,14 +303,13 @@ static void rxrpc_free_peer(struct rxrpc_peer *peer) * since we've already done a search in the list from the non-reentrant context * (the data_ready handler) that is the only place we can add new peers. 
*/ -void rxrpc_new_incoming_peer(struct rxrpc_sock *rx, struct rxrpc_local *local, - struct rxrpc_peer *peer) +void rxrpc_new_incoming_peer(struct rxrpc_local *local, struct rxrpc_peer *peer) { struct rxrpc_net *rxnet = local->rxnet; unsigned long hash_key; hash_key = rxrpc_peer_hash_key(local, &peer->srx); - rxrpc_init_peer(rx, peer, hash_key); + rxrpc_init_peer(local, peer, hash_key); spin_lock(&rxnet->peer_hash_lock); hash_add_rcu(rxnet->peer_hash, &peer->hash_link, hash_key); @@ -322,8 +320,7 @@ void rxrpc_new_incoming_peer(struct rxrpc_sock *rx, struct rxrpc_local *local, /* * obtain a remote transport endpoint for the specified address */ -struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_sock *rx, - struct rxrpc_local *local, +struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_local *local, struct sockaddr_rxrpc *srx, gfp_t gfp) { struct rxrpc_peer *peer, *candidate; @@ -343,7 +340,7 @@ struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_sock *rx, /* The peer is not yet present in hash - create a candidate * for a new record and then redo the search. */ - candidate = rxrpc_create_peer(rx, local, srx, hash_key, gfp); + candidate = rxrpc_create_peer(local, srx, hash_key, gfp); if (!candidate) { _leave(" = NULL [nomem]"); return NULL;
From patchwork Tue Dec 6 16:01:17 2022 X-Patchwork-Submitter: David Howells X-Patchwork-Id: 30412 Organization: Red Hat UK Ltd. Registered in England and Wales under Company Registration No.
3798903 Subject: [PATCH net-next 19/32] rxrpc: Make the set of connection IDs per local endpoint From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Tue, 06 Dec 2022 16:01:17 +0000 Message-ID: <167034247767.1105287.2181920855570975717.stgit@warthog.procyon.org.uk> In-Reply-To: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> References: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.3 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1751481358912271687?= X-GMAIL-MSGID: =?utf-8?q?1751481358912271687?= Make the set of connection IDs per local endpoint so that endpoints don't cause each other's connections to get dismissed. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- net/rxrpc/af_rxrpc.c | 8 -------- net/rxrpc/ar-internal.h | 5 +++-- net/rxrpc/conn_client.c | 44 +++++++++++++++++++------------------------- net/rxrpc/conn_object.c | 6 +++--- net/rxrpc/local_object.c | 10 ++++++++++ 5 files changed, 35 insertions(+), 38 deletions(-) diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c index b9dadc89fede..c82af5ebc2b3 100644 --- a/net/rxrpc/af_rxrpc.c +++ b/net/rxrpc/af_rxrpc.c @@ -957,16 +957,9 @@ static const struct net_proto_family rxrpc_family_ops = { static int __init af_rxrpc_init(void) { int ret = -1; - unsigned int tmp; BUILD_BUG_ON(sizeof(struct rxrpc_skb_priv) > sizeof_field(struct sk_buff, cb)); - get_random_bytes(&tmp, sizeof(tmp)); - tmp &= 0x3fffffff; - if (tmp == 0) - tmp = 1; - idr_set_cursor(&rxrpc_client_conn_ids, tmp); - ret = -ENOMEM; rxrpc_call_jar = kmem_cache_create( "rxrpc_call_jar", sizeof(struct rxrpc_call), 0, @@ -1062,7 +1055,6 @@ static void __exit af_rxrpc_exit(void) * are released. 
*/ rcu_barrier(); - rxrpc_destroy_client_conn_ids(); destroy_workqueue(rxrpc_workqueue); rxrpc_exit_security(); diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 1e6e11fb3a90..4f4dc3d49c35 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -301,6 +301,8 @@ struct rxrpc_local { int debug_id; /* debug ID for printks */ bool dead; bool service_closed; /* Service socket closed */ + struct idr conn_ids; /* List of connection IDs */ + spinlock_t conn_lock; /* Lock for client connection pool */ struct sockaddr_rxrpc srx; /* local address */ }; @@ -888,9 +890,8 @@ static inline bool rxrpc_is_client_call(const struct rxrpc_call *call) extern unsigned int rxrpc_reap_client_connections; extern unsigned long rxrpc_conn_idle_client_expiry; extern unsigned long rxrpc_conn_idle_client_fast_expiry; -extern struct idr rxrpc_client_conn_ids; -void rxrpc_destroy_client_conn_ids(void); +void rxrpc_destroy_client_conn_ids(struct rxrpc_local *); struct rxrpc_bundle *rxrpc_get_bundle(struct rxrpc_bundle *, enum rxrpc_bundle_trace); void rxrpc_put_bundle(struct rxrpc_bundle *, enum rxrpc_bundle_trace); int rxrpc_connect_call(struct rxrpc_sock *, struct rxrpc_call *, diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c index 4cfd8845df90..66aca478290e 100644 --- a/net/rxrpc/conn_client.c +++ b/net/rxrpc/conn_client.c @@ -34,12 +34,6 @@ __read_mostly unsigned int rxrpc_reap_client_connections = 900; __read_mostly unsigned long rxrpc_conn_idle_client_expiry = 2 * 60 * HZ; __read_mostly unsigned long rxrpc_conn_idle_client_fast_expiry = 2 * HZ; -/* - * We use machine-unique IDs for our client connections. - */ -DEFINE_IDR(rxrpc_client_conn_ids); -static DEFINE_SPINLOCK(rxrpc_conn_id_lock); - static void rxrpc_deactivate_bundle(struct rxrpc_bundle *bundle); /* @@ -51,65 +45,65 @@ static void rxrpc_deactivate_bundle(struct rxrpc_bundle *bundle); static int rxrpc_get_client_connection_id(struct rxrpc_connection *conn, gfp_t gfp) { - struct rxrpc_net *rxnet = conn->rxnet; + struct rxrpc_local *local = conn->local; int id; _enter(""); idr_preload(gfp); - spin_lock(&rxrpc_conn_id_lock); + spin_lock(&local->conn_lock); - id = idr_alloc_cyclic(&rxrpc_client_conn_ids, conn, + id = idr_alloc_cyclic(&local->conn_ids, conn, 1, 0x40000000, GFP_NOWAIT); if (id < 0) goto error; - spin_unlock(&rxrpc_conn_id_lock); + spin_unlock(&local->conn_lock); idr_preload_end(); - conn->proto.epoch = rxnet->epoch; + conn->proto.epoch = local->rxnet->epoch; conn->proto.cid = id << RXRPC_CIDSHIFT; set_bit(RXRPC_CONN_HAS_IDR, &conn->flags); _leave(" [CID %x]", conn->proto.cid); return 0; error: - spin_unlock(&rxrpc_conn_id_lock); + spin_unlock(&local->conn_lock); idr_preload_end(); _leave(" = %d", id); return id; } /* - * Release a connection ID for a client connection from the global pool. + * Release a connection ID for a client connection. */ -static void rxrpc_put_client_connection_id(struct rxrpc_connection *conn) +static void rxrpc_put_client_connection_id(struct rxrpc_local *local, + struct rxrpc_connection *conn) { if (test_bit(RXRPC_CONN_HAS_IDR, &conn->flags)) { - spin_lock(&rxrpc_conn_id_lock); - idr_remove(&rxrpc_client_conn_ids, - conn->proto.cid >> RXRPC_CIDSHIFT); - spin_unlock(&rxrpc_conn_id_lock); + spin_lock(&local->conn_lock); + idr_remove(&local->conn_ids, conn->proto.cid >> RXRPC_CIDSHIFT); + spin_unlock(&local->conn_lock); } } /* * Destroy the client connection ID tree. 
*/ -void rxrpc_destroy_client_conn_ids(void) +void rxrpc_destroy_client_conn_ids(struct rxrpc_local *local) { struct rxrpc_connection *conn; int id; - if (!idr_is_empty(&rxrpc_client_conn_ids)) { - idr_for_each_entry(&rxrpc_client_conn_ids, conn, id) { + if (!idr_is_empty(&local->conn_ids)) { + idr_for_each_entry(&local->conn_ids, conn, id) { pr_err("AF_RXRPC: Leaked client conn %p {%d}\n", conn, refcount_read(&conn->ref)); } BUG(); } - idr_destroy(&rxrpc_client_conn_ids); + idr_destroy(&local->conn_ids); } /* @@ -225,7 +219,7 @@ rxrpc_alloc_client_connection(struct rxrpc_bundle *bundle, gfp_t gfp) return conn; error_1: - rxrpc_put_client_connection_id(conn); + rxrpc_put_client_connection_id(bundle->local, conn); error_0: kfree(conn); _leave(" = %d", ret); @@ -257,7 +251,7 @@ static bool rxrpc_may_reuse_conn(struct rxrpc_connection *conn) * times the maximum number of client conns away from the current * allocation point to try and keep the IDs concentrated. */ - id_cursor = idr_get_cursor(&rxrpc_client_conn_ids); + id_cursor = idr_get_cursor(&conn->local->conn_ids); id = conn->proto.cid >> RXRPC_CIDSHIFT; distance = id - id_cursor; if (distance < 0) @@ -982,7 +976,7 @@ void rxrpc_kill_client_conn(struct rxrpc_connection *conn) trace_rxrpc_client(conn, -1, rxrpc_client_cleanup); atomic_dec(&rxnet->nr_client_conns); - rxrpc_put_client_connection_id(conn); + rxrpc_put_client_connection_id(local, conn); } /* diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c index b3cb85e0ed70..52fad0744239 100644 --- a/net/rxrpc/conn_object.c +++ b/net/rxrpc/conn_object.c @@ -100,10 +100,10 @@ struct rxrpc_connection *rxrpc_find_client_connection_rcu(struct rxrpc_local *lo _enter(",%x", sp->hdr.cid & RXRPC_CIDMASK); - /* Look up client connections by connection ID alone as their IDs are - * unique for this machine. + /* Look up client connections by connection ID alone as their + * IDs are unique for this machine. 
*/ - conn = idr_find(&rxrpc_client_conn_ids, sp->hdr.cid >> RXRPC_CIDSHIFT); + conn = idr_find(&local->conn_ids, sp->hdr.cid >> RXRPC_CIDSHIFT); if (!conn || refcount_read(&conn->ref) == 0) { _debug("no conn"); goto not_found; diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c index 3843418ea90f..57697adf52c6 100644 --- a/net/rxrpc/local_object.c +++ b/net/rxrpc/local_object.c @@ -89,6 +89,7 @@ static struct rxrpc_local *rxrpc_alloc_local(struct net *net, const struct sockaddr_rxrpc *srx) { struct rxrpc_local *local; + u32 tmp; local = kzalloc(sizeof(struct rxrpc_local), GFP_KERNEL); if (local) { @@ -110,6 +111,14 @@ static struct rxrpc_local *rxrpc_alloc_local(struct net *net, local->debug_id = atomic_inc_return(&rxrpc_debug_id); memcpy(&local->srx, srx, sizeof(*srx)); local->srx.srx_service = 0; + idr_init(&local->conn_ids); + get_random_bytes(&tmp, sizeof(tmp)); + tmp &= 0x3fffffff; + if (tmp == 0) + tmp = 1; + idr_set_cursor(&local->conn_ids, tmp); + spin_lock_init(&local->conn_lock); + trace_rxrpc_local(local->debug_id, rxrpc_local_new, 1, 1); } @@ -411,6 +420,7 @@ void rxrpc_destroy_local(struct rxrpc_local *local) rxrpc_purge_queue(&local->rx_delay_queue); #endif rxrpc_purge_queue(&local->rx_queue); + rxrpc_destroy_client_conn_ids(local); } /* From patchwork Tue Dec 6 16:01:26 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 30410 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp2906734wrr; Tue, 6 Dec 2022 08:06:06 -0800 (PST) X-Google-Smtp-Source: AA0mqf4lOK82gn94f3CRaf029vDXIVKBOm9lcdYo2gLDUZZmsY8b+TDgLuLaRDbcutzIcFsneqUA X-Received: by 2002:a17:906:138d:b0:7c1:a0b:c0a2 with SMTP id f13-20020a170906138d00b007c10a0bc0a2mr3241021ejc.568.1670342766416; Tue, 06 Dec 2022 08:06:06 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1670342766; cv=none; d=google.com; s=arc-20160816; b=hU+D7sdKkIc5RftdRVCQWRA5aQ0ZMmkezozQ83q5O7Chir6RJ2wJNFts/DqCPhCyuD OzBuCMno6HcdPUdhmkyqmEzl8Hxwo515uq1vVaiqjfsf0NORPEWLhgdYnXM1wFHMTqZs gMMFokOdcWVOr+Lb77nVCdB+Xb0+6dRVjyl2DouZXfR3VYCzmHGzznMXorqpco5CqwPW iXTVemVtMqhQvkrZjg9FVbdda8ET1sTfqLCKnl4ObgzqzExhVj2j0WOI+dv01HOU6RL4 6jiD9S336P/1as2JMgZSdiJr/Q1uV7Dx6F0lPguJA2mu+rPHY2Hv0Ab0F2CyMxTAEwjv bxpA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:cc:to:from :subject:organization:dkim-signature; bh=gKRLpVEUwHNwp00j44YemD4GrzDTsfSMu14l1PMgPMI=; b=Y6T99WBHRTq/JbxPwUECLm0YN4j+XAPp8WHabivvBoR1RgS0Q7r5Zc/Eq0+wQzlsSZ XstlPG96gN0/Kgj8VbodyEnDMGYpB4Rb8LxLsQY6GdasUIxXr1V0BEFf6MiXJTkWFxTq gkTH10zBJ1ewsfBEeoNn1tIu1L6zpY3Jx8RqxLL7eZ7HUDDaQDPoM4I0xMKprngBwDh1 I3EkloaFXVH6B62ITlXwGIMBn1dxdhBx5EYWfTULR4dADYANnM6K3p/Q+kj9Qt2GGbkN A3i6a2VhYkTbFfYPREbyF2ChxtsrMjfilE5rp9kokKoyReMu9Y/OZp5pAzB0/F+8vVL5 dIBw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b="M/hqnD0M"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: from out1.vger.email (out1.vger.email. 
From patchwork Tue Dec 6 16:01:26 2022
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 30410
Organization: Red Hat UK Ltd. Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SI4 1TE, United Kingdom. Registered in England and Wales under Company Registration No.
3798903 Subject: [PATCH net-next 20/32] rxrpc: Offload the completion of service conn security to the I/O thread From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Tue, 06 Dec 2022 16:01:26 +0000 Message-ID: <167034248623.1105287.17936732760243344472.stgit@warthog.procyon.org.uk> In-Reply-To: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> References: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.7 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1751481336345511382?= X-GMAIL-MSGID: =?utf-8?q?1751481336345511382?= Offload the completion of the challenge/response cycle on a service connection to the I/O thread. After the RESPONSE packet has been successfully decrypted and verified by the work queue, offloading the changing of the call states to the I/O thread makes iteration over the conn's channel list simpler. Do this by marking the RESPONSE skbuff and putting it onto the receive queue for the I/O thread to collect. We put it on the front of the queue as we've already received the packet for it. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- include/trace/events/rxrpc.h | 2 ++ net/rxrpc/ar-internal.h | 1 + net/rxrpc/conn_event.c | 46 +++++++++++++++++++++++++++++------------- net/rxrpc/io_thread.c | 5 +++++ 4 files changed, 40 insertions(+), 14 deletions(-) diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index 76d0bdc3e170..f4a1ca03fb1a 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -133,6 +133,7 @@ #define rxrpc_skb_traces \ EM(rxrpc_skb_eaten_by_unshare, "ETN unshare ") \ EM(rxrpc_skb_eaten_by_unshare_nomem, "ETN unshar-nm") \ + EM(rxrpc_skb_get_conn_secured, "GET conn-secd") \ EM(rxrpc_skb_get_conn_work, "GET conn-work") \ EM(rxrpc_skb_get_local_work, "GET locl-work") \ EM(rxrpc_skb_get_reject_work, "GET rej-work ") \ @@ -142,6 +143,7 @@ EM(rxrpc_skb_new_error_report, "NEW error-rpt") \ EM(rxrpc_skb_new_jumbo_subpacket, "NEW jumbo-sub") \ EM(rxrpc_skb_new_unshared, "NEW unshared ") \ + EM(rxrpc_skb_put_conn_secured, "PUT conn-secd") \ EM(rxrpc_skb_put_conn_work, "PUT conn-work") \ EM(rxrpc_skb_put_error_report, "PUT error-rep") \ EM(rxrpc_skb_put_input, "PUT input ") \ diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 4f4dc3d49c35..508b6e2c26e1 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -38,6 +38,7 @@ struct rxrpc_txbuf; enum rxrpc_skb_mark { RXRPC_SKB_MARK_PACKET, /* Received packet */ RXRPC_SKB_MARK_ERROR, /* Error notification */ + RXRPC_SKB_MARK_SERVICE_CONN_SECURED, /* Service connection response has been verified */ RXRPC_SKB_MARK_REJECT_BUSY, /* Reject with BUSY */ RXRPC_SKB_MARK_REJECT_ABORT, /* Reject with ABORT (code in skb->priority) */ }; diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c index 21c826161741..434fa7e2f515 100644 --- a/net/rxrpc/conn_event.c +++ 
b/net/rxrpc/conn_event.c @@ -248,7 +248,7 @@ static int rxrpc_process_event(struct rxrpc_connection *conn, struct sk_buff *skb) { struct rxrpc_skb_priv *sp = rxrpc_skb(skb); - int loop, ret; + int ret; if (conn->state == RXRPC_CONN_ABORTED) return -ECONNABORTED; @@ -269,22 +269,21 @@ static int rxrpc_process_event(struct rxrpc_connection *conn, if (ret < 0) return ret; - spin_lock(&conn->bundle->channel_lock); spin_lock(&conn->state_lock); + if (conn->state == RXRPC_CONN_SERVICE_CHALLENGING) + smp_store_release(&conn->state, RXRPC_CONN_SERVICE); + spin_unlock(&conn->state_lock); - if (conn->state == RXRPC_CONN_SERVICE_CHALLENGING) { - conn->state = RXRPC_CONN_SERVICE; - spin_unlock(&conn->state_lock); - for (loop = 0; loop < RXRPC_MAXCALLS; loop++) - rxrpc_call_is_secure( - rcu_dereference_protected( - conn->channels[loop].call, - lockdep_is_held(&conn->bundle->channel_lock))); - } else { - spin_unlock(&conn->state_lock); + if (conn->state == RXRPC_CONN_SERVICE) { + /* Offload call state flipping to the I/O thread. As + * we've already received the packet, put it on the + * front of the queue. + */ + skb->mark = RXRPC_SKB_MARK_SERVICE_CONN_SECURED; + rxrpc_get_skb(skb, rxrpc_skb_get_conn_secured); + skb_queue_head(&conn->local->rx_queue, skb); + rxrpc_wake_up_io_thread(conn->local); } - - spin_unlock(&conn->bundle->channel_lock); return 0; default: @@ -442,9 +441,28 @@ bool rxrpc_input_conn_packet(struct rxrpc_connection *conn, struct sk_buff *skb) */ void rxrpc_input_conn_event(struct rxrpc_connection *conn, struct sk_buff *skb) { + unsigned int loop; + if (test_and_clear_bit(RXRPC_CONN_EV_ABORT_CALLS, &conn->events)) rxrpc_abort_calls(conn); + switch (skb->mark) { + case RXRPC_SKB_MARK_SERVICE_CONN_SECURED: + if (conn->state != RXRPC_CONN_SERVICE) + break; + + spin_lock(&conn->bundle->channel_lock); + + for (loop = 0; loop < RXRPC_MAXCALLS; loop++) + rxrpc_call_is_secure( + rcu_dereference_protected( + conn->channels[loop].call, + lockdep_is_held(&conn->bundle->channel_lock))); + + spin_unlock(&conn->bundle->channel_lock); + break; + } + /* Process delayed ACKs whose time has come. */ if (conn->flags & RXRPC_CONN_FINAL_ACK_MASK) rxrpc_process_delayed_final_acks(conn, false); diff --git a/net/rxrpc/io_thread.c b/net/rxrpc/io_thread.c index f709f6f109db..a89f8ba465b1 100644 --- a/net/rxrpc/io_thread.c +++ b/net/rxrpc/io_thread.c @@ -455,6 +455,7 @@ int rxrpc_io_thread(void *data) /* Process received packets and errors. 
*/ if ((skb = __skb_dequeue(&rx_queue))) { + struct rxrpc_skb_priv *sp = rxrpc_skb(skb); switch (skb->mark) { case RXRPC_SKB_MARK_PACKET: skb->priority = 0; @@ -467,6 +468,10 @@ int rxrpc_io_thread(void *data) rxrpc_input_error(local, skb); rxrpc_free_skb(skb, rxrpc_skb_put_error_report); break; + case RXRPC_SKB_MARK_SERVICE_CONN_SECURED: + rxrpc_input_conn_event(sp->conn, skb); + rxrpc_put_connection(sp->conn, rxrpc_conn_put_poke); + rxrpc_free_skb(skb, rxrpc_skb_put_conn_secured); break; default: WARN_ON_ONCE(1);
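Note the refcounting contract implied by the hunk above: once the I/O thread has handled a RXRPC_SKB_MARK_SERVICE_CONN_SECURED packet it drops a connection reference via rxrpc_put_connection(sp->conn, rxrpc_conn_put_poke) as well as the skb reference, so whichever side queues the marked skb must leave a pinned connection in sp->conn. A sketch of one way the producer side can satisfy that contract; whether the series takes the ref here or earlier at packet-dispatch time is not shown in this excerpt, and the get-trace label used below is an assumption:

static void example_queue_conn_secured(struct rxrpc_connection *conn,
				       struct sk_buff *skb)
{
	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);

	/* Pin the connection for the I/O thread; it is released there with
	 * rxrpc_conn_put_poke() once the event has been processed.
	 */
	sp->conn = rxrpc_get_connection(conn, rxrpc_conn_get_poke);
	skb->mark = RXRPC_SKB_MARK_SERVICE_CONN_SECURED;
	rxrpc_get_skb(skb, rxrpc_skb_get_conn_secured);
	skb_queue_head(&conn->local->rx_queue, skb);
	rxrpc_wake_up_io_thread(conn->local);
}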
From patchwork Tue Dec 6 16:01:32 2022
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 30414
Organization: Red Hat UK Ltd. Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SI4 1TE, United Kingdom. Registered in England and Wales under Company Registration No.
3798903 Subject: [PATCH net-next 21/32] rxrpc: Set up a connection bundle from a call, not rxrpc_conn_parameters From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Tue, 06 Dec 2022 16:01:32 +0000 Message-ID: <167034249278.1105287.16452411592972795770.stgit@warthog.procyon.org.uk> In-Reply-To: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> References: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1751481466795026420?= X-GMAIL-MSGID: =?utf-8?q?1751481466795026420?= Use the information now stored in struct rxrpc_call to configure the connection bundle and thence the connection, rather than using the rxrpc_conn_parameters struct. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- include/trace/events/rxrpc.h | 3 - net/rxrpc/af_rxrpc.c | 1 net/rxrpc/ar-internal.h | 9 ++- net/rxrpc/call_object.c | 4 + net/rxrpc/conn_client.c | 132 +++++++++++++++++++++--------------------- net/rxrpc/sendmsg.c | 1 6 files changed, 76 insertions(+), 74 deletions(-) diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index f4a1ca03fb1a..4d5eb08025c0 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -185,7 +185,6 @@ #define rxrpc_peer_traces \ EM(rxrpc_peer_free, "FREE ") \ EM(rxrpc_peer_get_accept, "GET accept ") \ - EM(rxrpc_peer_get_activate_call, "GET act-call") \ EM(rxrpc_peer_get_bundle, "GET bundle ") \ EM(rxrpc_peer_get_client_conn, "GET cln-conn") \ EM(rxrpc_peer_get_input, "GET input ") \ @@ -198,7 +197,6 @@ EM(rxrpc_peer_put_bundle, "PUT bundle ") \ EM(rxrpc_peer_put_call, "PUT call ") \ EM(rxrpc_peer_put_conn, "PUT conn ") \ - EM(rxrpc_peer_put_discard_tmp, "PUT disc-tmp") \ EM(rxrpc_peer_put_input, "PUT input ") \ EM(rxrpc_peer_put_input_error, "PUT inpt-err") \ E_(rxrpc_peer_put_keepalive, "PUT keepaliv") @@ -208,6 +206,7 @@ EM(rxrpc_bundle_get_client_call, "GET clt-call") \ EM(rxrpc_bundle_get_client_conn, "GET clt-conn") \ EM(rxrpc_bundle_get_service_conn, "GET svc-conn") \ + EM(rxrpc_bundle_put_call, "PUT call ") \ EM(rxrpc_bundle_put_conn, "PUT conn ") \ EM(rxrpc_bundle_put_discard, "PUT discard ") \ E_(rxrpc_bundle_new, "NEW ") diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c index c82af5ebc2b3..7446b7bd5490 100644 --- a/net/rxrpc/af_rxrpc.c +++ b/net/rxrpc/af_rxrpc.c @@ -328,7 +328,6 @@ struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *sock, mutex_unlock(&call->user_mutex); } - rxrpc_put_peer(cp.peer, rxrpc_peer_put_discard_tmp); _leave(" = %p", call); return call; } diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 508b6e2c26e1..c0704a983055 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -361,7 +361,6 @@ struct rxrpc_conn_proto { struct rxrpc_conn_parameters { struct rxrpc_local *local; /* Representation of local endpoint */ - struct 
rxrpc_peer *peer; /* Remote endpoint */ struct key *key; /* Security details */ bool exclusive; /* T if conn is exclusive */ bool upgrade; /* T if service ID can be upgraded */ @@ -429,6 +428,7 @@ struct rxrpc_bundle { struct rxrpc_local *local; /* Representation of local endpoint */ struct rxrpc_peer *peer; /* Remote endpoint */ struct key *key; /* Security details */ + const struct rxrpc_security *security; /* applied security module */ refcount_t ref; atomic_t active; /* Number of active users */ unsigned int debug_id; @@ -594,6 +594,7 @@ enum rxrpc_congest_mode { struct rxrpc_call { struct rcu_head rcu; struct rxrpc_connection *conn; /* connection carrying call */ + struct rxrpc_bundle *bundle; /* Connection bundle to use */ struct rxrpc_peer *peer; /* Peer record for remote address */ struct rxrpc_local *local; /* Representation of local endpoint */ struct rxrpc_sock __rcu *socket; /* socket responsible */ @@ -895,16 +896,16 @@ extern unsigned long rxrpc_conn_idle_client_fast_expiry; void rxrpc_destroy_client_conn_ids(struct rxrpc_local *); struct rxrpc_bundle *rxrpc_get_bundle(struct rxrpc_bundle *, enum rxrpc_bundle_trace); void rxrpc_put_bundle(struct rxrpc_bundle *, enum rxrpc_bundle_trace); -int rxrpc_connect_call(struct rxrpc_sock *, struct rxrpc_call *, - struct rxrpc_conn_parameters *, struct sockaddr_rxrpc *, - gfp_t); +int rxrpc_connect_call(struct rxrpc_call *, gfp_t); void rxrpc_expose_client_call(struct rxrpc_call *); void rxrpc_disconnect_client_call(struct rxrpc_bundle *, struct rxrpc_call *); +void rxrpc_deactivate_bundle(struct rxrpc_bundle *); void rxrpc_put_client_conn(struct rxrpc_connection *, enum rxrpc_conn_trace); void rxrpc_discard_expired_client_conns(struct work_struct *); void rxrpc_destroy_all_client_connections(struct rxrpc_net *); void rxrpc_clean_up_local_conns(struct rxrpc_local *); + /* * conn_event.c */ diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c index cd491fd83c0d..6a28956deb9f 100644 --- a/net/rxrpc/call_object.c +++ b/net/rxrpc/call_object.c @@ -366,7 +366,7 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx, /* Set up or get a connection record and set the protocol parameters, * including channel number and call ID. */ - ret = rxrpc_connect_call(rx, call, cp, srx, gfp); + ret = rxrpc_connect_call(call, gfp); if (ret < 0) goto error_attached_to_socket; @@ -660,6 +660,8 @@ static void rxrpc_destroy_call(struct work_struct *work) rxrpc_put_txbuf(call->tx_pending, rxrpc_txbuf_put_cleaned); rxrpc_put_connection(call->conn, rxrpc_conn_put_call); + rxrpc_deactivate_bundle(call->bundle); + rxrpc_put_bundle(call->bundle, rxrpc_bundle_put_call); rxrpc_put_peer(call->peer, rxrpc_peer_put_call); rxrpc_put_local(call->local, rxrpc_local_put_call); call_rcu(&call->rcu, rxrpc_rcu_free_call); diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c index 66aca478290e..37d8d3349e81 100644 --- a/net/rxrpc/conn_client.c +++ b/net/rxrpc/conn_client.c @@ -34,7 +34,10 @@ __read_mostly unsigned int rxrpc_reap_client_connections = 900; __read_mostly unsigned long rxrpc_conn_idle_client_expiry = 2 * 60 * HZ; __read_mostly unsigned long rxrpc_conn_idle_client_fast_expiry = 2 * HZ; -static void rxrpc_deactivate_bundle(struct rxrpc_bundle *bundle); +static void rxrpc_activate_bundle(struct rxrpc_bundle *bundle) +{ + atomic_inc(&bundle->active); +} /* * Get a connection ID and epoch for a client connection from the global pool. 
@@ -109,20 +112,21 @@ void rxrpc_destroy_client_conn_ids(struct rxrpc_local *local) /* * Allocate a connection bundle. */ -static struct rxrpc_bundle *rxrpc_alloc_bundle(struct rxrpc_conn_parameters *cp, +static struct rxrpc_bundle *rxrpc_alloc_bundle(struct rxrpc_call *call, gfp_t gfp) { struct rxrpc_bundle *bundle; bundle = kzalloc(sizeof(*bundle), gfp); if (bundle) { - bundle->local = cp->local; - bundle->peer = rxrpc_get_peer(cp->peer, rxrpc_peer_get_bundle); - bundle->key = cp->key; - bundle->exclusive = cp->exclusive; - bundle->upgrade = cp->upgrade; - bundle->service_id = cp->service_id; - bundle->security_level = cp->security_level; + bundle->local = call->local; + bundle->peer = rxrpc_get_peer(call->peer, rxrpc_peer_get_bundle); + bundle->key = key_get(call->key); + bundle->security = call->security; + bundle->exclusive = test_bit(RXRPC_CALL_EXCLUSIVE, &call->flags); + bundle->upgrade = test_bit(RXRPC_CALL_UPGRADE, &call->flags); + bundle->service_id = call->dest_srx.srx_service; + bundle->security_level = call->security_level; refcount_set(&bundle->ref, 1); atomic_set(&bundle->active, 1); spin_lock_init(&bundle->channel_lock); @@ -146,19 +150,23 @@ static void rxrpc_free_bundle(struct rxrpc_bundle *bundle) { trace_rxrpc_bundle(bundle->debug_id, 1, rxrpc_bundle_free); rxrpc_put_peer(bundle->peer, rxrpc_peer_put_bundle); + key_put(bundle->key); kfree(bundle); } void rxrpc_put_bundle(struct rxrpc_bundle *bundle, enum rxrpc_bundle_trace why) { - unsigned int id = bundle->debug_id; + unsigned int id; bool dead; int r; - dead = __refcount_dec_and_test(&bundle->ref, &r); - trace_rxrpc_bundle(id, r - 1, why); - if (dead) - rxrpc_free_bundle(bundle); + if (bundle) { + id = bundle->debug_id; + dead = __refcount_dec_and_test(&bundle->ref, &r); + trace_rxrpc_bundle(id, r - 1, why); + if (dead) + rxrpc_free_bundle(bundle); + } } /* @@ -272,20 +280,23 @@ static bool rxrpc_may_reuse_conn(struct rxrpc_connection *conn) * Look up the conn bundle that matches the connection parameters, adding it if * it doesn't yet exist. */ -static struct rxrpc_bundle *rxrpc_look_up_bundle(struct rxrpc_conn_parameters *cp, - gfp_t gfp) +static struct rxrpc_bundle *rxrpc_look_up_bundle(struct rxrpc_call *call, gfp_t gfp) { static atomic_t rxrpc_bundle_id; struct rxrpc_bundle *bundle, *candidate; - struct rxrpc_local *local = cp->local; + struct rxrpc_local *local = call->local; struct rb_node *p, **pp, *parent; long diff; + bool upgrade = test_bit(RXRPC_CALL_UPGRADE, &call->flags); _enter("{%px,%x,%u,%u}", - cp->peer, key_serial(cp->key), cp->security_level, cp->upgrade); + call->peer, key_serial(call->key), call->security_level, + upgrade); - if (cp->exclusive) - return rxrpc_alloc_bundle(cp, gfp); + if (test_bit(RXRPC_CALL_EXCLUSIVE, &call->flags)) { + call->bundle = rxrpc_alloc_bundle(call, gfp); + return call->bundle; + } /* First, see if the bundle is already there. 
*/ _debug("search 1"); @@ -294,11 +305,11 @@ static struct rxrpc_bundle *rxrpc_look_up_bundle(struct rxrpc_conn_parameters *c while (p) { bundle = rb_entry(p, struct rxrpc_bundle, local_node); -#define cmp(X) ((long)bundle->X - (long)cp->X) - diff = (cmp(peer) ?: - cmp(key) ?: - cmp(security_level) ?: - cmp(upgrade)); +#define cmp(X, Y) ((long)X - (long)Y) + diff = (cmp(bundle->peer, call->peer) ?: + cmp(bundle->key, call->key) ?: + cmp(bundle->security_level, call->security_level) ?: + cmp(bundle->upgrade, upgrade)); #undef cmp if (diff < 0) p = p->rb_left; @@ -311,9 +322,9 @@ static struct rxrpc_bundle *rxrpc_look_up_bundle(struct rxrpc_conn_parameters *c _debug("not found"); /* It wasn't. We need to add one. */ - candidate = rxrpc_alloc_bundle(cp, gfp); + candidate = rxrpc_alloc_bundle(call, gfp); if (!candidate) - return NULL; + return ERR_PTR(-ENOMEM); _debug("search 2"); spin_lock(&local->client_bundles_lock); @@ -323,11 +334,11 @@ static struct rxrpc_bundle *rxrpc_look_up_bundle(struct rxrpc_conn_parameters *c parent = *pp; bundle = rb_entry(parent, struct rxrpc_bundle, local_node); -#define cmp(X) ((long)bundle->X - (long)cp->X) - diff = (cmp(peer) ?: - cmp(key) ?: - cmp(security_level) ?: - cmp(upgrade)); +#define cmp(X, Y) ((long)X - (long)Y) + diff = (cmp(bundle->peer, call->peer) ?: + cmp(bundle->key, call->key) ?: + cmp(bundle->security_level, call->security_level) ?: + cmp(bundle->upgrade, upgrade)); #undef cmp if (diff < 0) pp = &(*pp)->rb_left; @@ -341,19 +352,19 @@ static struct rxrpc_bundle *rxrpc_look_up_bundle(struct rxrpc_conn_parameters *c candidate->debug_id = atomic_inc_return(&rxrpc_bundle_id); rb_link_node(&candidate->local_node, parent, pp); rb_insert_color(&candidate->local_node, &local->client_bundles); - rxrpc_get_bundle(candidate, rxrpc_bundle_get_client_call); + call->bundle = rxrpc_get_bundle(candidate, rxrpc_bundle_get_client_call); spin_unlock(&local->client_bundles_lock); - _leave(" = %u [new]", candidate->debug_id); - return candidate; + _leave(" = B=%u [new]", call->bundle->debug_id); + return call->bundle; found_bundle_free: rxrpc_free_bundle(candidate); found_bundle: - rxrpc_get_bundle(bundle, rxrpc_bundle_get_client_call); - atomic_inc(&bundle->active); + call->bundle = rxrpc_get_bundle(bundle, rxrpc_bundle_get_client_call); + rxrpc_activate_bundle(bundle); spin_unlock(&local->client_bundles_lock); - _leave(" = %u [found]", bundle->debug_id); - return bundle; + _leave(" = B=%u [found]", call->bundle->debug_id); + return call->bundle; } /* @@ -362,31 +373,25 @@ static struct rxrpc_bundle *rxrpc_look_up_bundle(struct rxrpc_conn_parameters *c * If we return with a connection, the call will be on its waiting list. It's * left to the caller to assign a channel and wake up the call. 
*/ -static struct rxrpc_bundle *rxrpc_prep_call(struct rxrpc_sock *rx, - struct rxrpc_call *call, - struct rxrpc_conn_parameters *cp, - struct sockaddr_rxrpc *srx, - gfp_t gfp) +static struct rxrpc_bundle *rxrpc_prep_call(struct rxrpc_call *call, gfp_t gfp) { struct rxrpc_bundle *bundle; _enter("{%d,%lx},", call->debug_id, call->user_call_ID); - cp->peer = rxrpc_lookup_peer(cp->local, srx, gfp); - if (!cp->peer) + call->peer = rxrpc_lookup_peer(call->local, &call->dest_srx, gfp); + if (!call->peer) goto error; call->tx_last_sent = ktime_get_real(); - call->cong_ssthresh = cp->peer->cong_ssthresh; + call->cong_ssthresh = call->peer->cong_ssthresh; if (call->cong_cwnd >= call->cong_ssthresh) call->cong_mode = RXRPC_CALL_CONGEST_AVOIDANCE; else call->cong_mode = RXRPC_CALL_SLOW_START; - if (cp->upgrade) - __set_bit(RXRPC_CALL_UPGRADE, &call->flags); /* Find the client connection bundle. */ - bundle = rxrpc_look_up_bundle(cp, gfp); + bundle = rxrpc_look_up_bundle(call, gfp); if (!bundle) goto error; @@ -449,7 +454,7 @@ static void rxrpc_add_conn_to_bundle(struct rxrpc_bundle *bundle, gfp_t gfp) if (old) trace_rxrpc_client(old, -1, rxrpc_client_replace); candidate->bundle_shift = shift; - atomic_inc(&bundle->active); + rxrpc_activate_bundle(bundle); bundle->conns[i] = candidate; for (j = 0; j < RXRPC_MAXCALLS; j++) set_bit(shift + j, &bundle->avail_chans); @@ -541,7 +546,6 @@ static void rxrpc_activate_one_channel(struct rxrpc_connection *conn, rxrpc_see_call(call, rxrpc_call_see_activate_client); list_del_init(&call->chan_wait_link); - call->peer = rxrpc_get_peer(conn->peer, rxrpc_peer_get_activate_call); call->conn = rxrpc_get_connection(conn, rxrpc_conn_get_activate_call); call->cid = conn->proto.cid | channel; call->call_id = call_id; @@ -705,21 +709,18 @@ static int rxrpc_wait_for_channel(struct rxrpc_bundle *bundle, * find a connection for a call * - called in process context with IRQs enabled */ -int rxrpc_connect_call(struct rxrpc_sock *rx, - struct rxrpc_call *call, - struct rxrpc_conn_parameters *cp, - struct sockaddr_rxrpc *srx, - gfp_t gfp) +int rxrpc_connect_call(struct rxrpc_call *call, gfp_t gfp) { struct rxrpc_bundle *bundle; - struct rxrpc_net *rxnet = cp->local->rxnet; + struct rxrpc_local *local = call->local; + struct rxrpc_net *rxnet = local->rxnet; int ret = 0; _enter("{%d,%lx},", call->debug_id, call->user_call_ID); rxrpc_discard_expired_client_conns(&rxnet->client_conn_reaper); - bundle = rxrpc_prep_call(rx, call, cp, srx, gfp); + bundle = rxrpc_prep_call(call, gfp); if (IS_ERR(bundle)) { ret = PTR_ERR(bundle); goto out; @@ -735,9 +736,6 @@ int rxrpc_connect_call(struct rxrpc_sock *rx, /* Paired with the write barrier in rxrpc_activate_one_channel(). */ smp_rmb(); -out_put_bundle: - rxrpc_deactivate_bundle(bundle); - rxrpc_put_bundle(bundle, rxrpc_bundle_get_client_call); out: _leave(" = %d", ret); return ret; @@ -755,7 +753,7 @@ int rxrpc_connect_call(struct rxrpc_sock *rx, trace_rxrpc_client(call->conn, ret, rxrpc_client_chan_wait_failed); rxrpc_set_call_completion(call, RXRPC_CALL_LOCAL_ERROR, 0, ret); rxrpc_disconnect_client_call(bundle, call); - goto out_put_bundle; + goto out; } /* @@ -945,11 +943,15 @@ static void rxrpc_unbundle_conn(struct rxrpc_connection *conn) /* * Drop the active count on a bundle. 
*/ -static void rxrpc_deactivate_bundle(struct rxrpc_bundle *bundle) +void rxrpc_deactivate_bundle(struct rxrpc_bundle *bundle) { - struct rxrpc_local *local = bundle->local; + struct rxrpc_local *local; bool need_put = false; + if (!bundle) + return; + + local = bundle->local; if (atomic_dec_and_lock(&bundle->active, &local->client_bundles_lock)) { if (!bundle->exclusive) { _debug("erase bundle"); diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c index cb40ff74c202..51c676bf03a8 100644 --- a/net/rxrpc/sendmsg.c +++ b/net/rxrpc/sendmsg.c @@ -560,7 +560,6 @@ rxrpc_new_client_call_for_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, atomic_inc_return(&rxrpc_debug_id)); /* The socket is now unlocked */ - rxrpc_put_peer(cp.peer, rxrpc_peer_put_discard_tmp); _leave(" = %p\n", call); return call; }
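As an aside on the comparison style kept by rxrpc_look_up_bundle() above: the cmp() chain relies on the GNU "x ?: y" extension, which yields x when x is non-zero and y otherwise, so the first key that differs decides which way to descend the rb-tree. A small, self-contained userspace illustration of the idiom (nothing rxrpc-specific; builds with gcc):

#include <stdio.h>

struct key {
	long peer;
	long security_level;
	long upgrade;
};

static long cmp_keys(const struct key *a, const struct key *b)
{
	/* The first non-zero difference wins; later terms are only
	 * evaluated if every earlier term compared equal.
	 */
	return (a->peer - b->peer) ?:
	       (a->security_level - b->security_level) ?:
	       (a->upgrade - b->upgrade);
}

int main(void)
{
	struct key a = { .peer = 1, .security_level = 2, .upgrade = 0 };
	struct key b = { .peer = 1, .security_level = 3, .upgrade = 1 };

	printf("%ld\n", cmp_keys(&a, &b));	/* -1: decided by security_level */
	return 0;
}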
From patchwork Tue Dec 6 16:01:41 2022
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 30413
Organization: Red Hat UK Ltd. Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SI4 1TE, United Kingdom. Registered in England and Wales under Company Registration No.
3798903 Subject: [PATCH net-next 22/32] rxrpc: Split out the call state changing functions into their own file From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Tue, 06 Dec 2022 16:01:41 +0000 Message-ID: <167034250159.1105287.10034224411568767550.stgit@warthog.procyon.org.uk> In-Reply-To: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> References: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.1 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1751481396336585636?= X-GMAIL-MSGID: =?utf-8?q?1751481396336585636?= Split out the functions that change the state of an rxrpc call into their own file. The idea being to remove anything to do with changing the state of a call directly from the rxrpc sendmsg() and recvmsg() paths and have all that done in the I/O thread only, with the ultimate aim of removing the state lock entirely. Moving the code out of sendmsg.c and recvmsg.c makes that easier to manage. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- net/rxrpc/Makefile | 1 + net/rxrpc/ar-internal.h | 16 +++++--- net/rxrpc/call_state.c | 92 +++++++++++++++++++++++++++++++++++++++++++++++ net/rxrpc/recvmsg.c | 84 ------------------------------------------- 4 files changed, 103 insertions(+), 90 deletions(-) create mode 100644 net/rxrpc/call_state.c diff --git a/net/rxrpc/Makefile b/net/rxrpc/Makefile index e76d3459d78e..ac5caf5a48e1 100644 --- a/net/rxrpc/Makefile +++ b/net/rxrpc/Makefile @@ -10,6 +10,7 @@ rxrpc-y := \ call_accept.o \ call_event.o \ call_object.o \ + call_state.o \ conn_client.o \ conn_event.o \ conn_object.o \ diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index c0704a983055..a95e161bd980 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -886,6 +886,16 @@ static inline bool rxrpc_is_client_call(const struct rxrpc_call *call) return !rxrpc_is_service_call(call); } +/* + * call_state.c + */ +bool __rxrpc_set_call_completion(struct rxrpc_call *, enum rxrpc_call_completion, u32, int); +bool rxrpc_set_call_completion(struct rxrpc_call *, enum rxrpc_call_completion, u32, int); +bool __rxrpc_call_completed(struct rxrpc_call *); +bool rxrpc_call_completed(struct rxrpc_call *); +bool __rxrpc_abort_call(struct rxrpc_call *, rxrpc_seq_t, u32, int, enum rxrpc_abort_reason); +bool rxrpc_abort_call(struct rxrpc_call *, rxrpc_seq_t, u32, int, enum rxrpc_abort_reason); + /* * conn_client.c */ @@ -1115,12 +1125,6 @@ extern const struct seq_operations rxrpc_local_seq_ops; * recvmsg.c */ void rxrpc_notify_socket(struct rxrpc_call *); -bool __rxrpc_set_call_completion(struct rxrpc_call *, enum rxrpc_call_completion, u32, int); -bool rxrpc_set_call_completion(struct rxrpc_call *, enum rxrpc_call_completion, u32, int); -bool __rxrpc_call_completed(struct rxrpc_call *); -bool rxrpc_call_completed(struct rxrpc_call *); -bool __rxrpc_abort_call(struct 
rxrpc_call *, rxrpc_seq_t, u32, int, enum rxrpc_abort_reason); -bool rxrpc_abort_call(struct rxrpc_call *, rxrpc_seq_t, u32, int, enum rxrpc_abort_reason); int rxrpc_recvmsg(struct socket *, struct msghdr *, size_t, int); /* diff --git a/net/rxrpc/call_state.c b/net/rxrpc/call_state.c new file mode 100644 index 000000000000..24d240773a6f --- /dev/null +++ b/net/rxrpc/call_state.c @@ -0,0 +1,92 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* Call state changing functions. + * + * Copyright (C) 2022 Red Hat, Inc. All Rights Reserved. + * Written by David Howells (dhowells@redhat.com) + */ + +#include "ar-internal.h" + +/* + * Transition a call to the complete state. + */ +bool __rxrpc_set_call_completion(struct rxrpc_call *call, + enum rxrpc_call_completion compl, + u32 abort_code, + int error) +{ + if (call->state < RXRPC_CALL_COMPLETE) { + call->abort_code = abort_code; + call->error = error; + call->completion = compl; + /* Allow reader of completion state to operate locklessly */ + smp_store_release(&call->state, RXRPC_CALL_COMPLETE); + trace_rxrpc_call_complete(call); + wake_up(&call->waitq); + rxrpc_notify_socket(call); + return true; + } + return false; +} + +bool rxrpc_set_call_completion(struct rxrpc_call *call, + enum rxrpc_call_completion compl, + u32 abort_code, + int error) +{ + bool ret = false; + + if (call->state < RXRPC_CALL_COMPLETE) { + spin_lock(&call->state_lock); + ret = __rxrpc_set_call_completion(call, compl, abort_code, error); + spin_unlock(&call->state_lock); + } + return ret; +} + +/* + * Record that a call successfully completed. + */ +bool __rxrpc_call_completed(struct rxrpc_call *call) +{ + return __rxrpc_set_call_completion(call, RXRPC_CALL_SUCCEEDED, 0, 0); +} + +bool rxrpc_call_completed(struct rxrpc_call *call) +{ + bool ret = false; + + if (call->state < RXRPC_CALL_COMPLETE) { + spin_lock(&call->state_lock); + ret = __rxrpc_call_completed(call); + spin_unlock(&call->state_lock); + } + return ret; +} + +/* + * Record that a call is locally aborted. + */ +bool __rxrpc_abort_call(struct rxrpc_call *call, rxrpc_seq_t seq, + u32 abort_code, int error, + enum rxrpc_abort_reason why) +{ + trace_rxrpc_abort(call->debug_id, why, call->cid, call->call_id, seq, + abort_code, error); + return __rxrpc_set_call_completion(call, RXRPC_CALL_LOCALLY_ABORTED, + abort_code, error); +} + +bool rxrpc_abort_call(struct rxrpc_call *call, rxrpc_seq_t seq, + u32 abort_code, int error, + enum rxrpc_abort_reason why) +{ + bool ret; + + spin_lock(&call->state_lock); + ret = __rxrpc_abort_call(call, seq, abort_code, error, why); + spin_unlock(&call->state_lock); + if (ret) + rxrpc_send_abort_packet(call); + return ret; +} diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c index b7178f2150c0..c163c103aa1d 100644 --- a/net/rxrpc/recvmsg.c +++ b/net/rxrpc/recvmsg.c @@ -58,90 +58,6 @@ void rxrpc_notify_socket(struct rxrpc_call *call) _leave(""); } -/* - * Transition a call to the complete state. 
- */ -bool __rxrpc_set_call_completion(struct rxrpc_call *call, - enum rxrpc_call_completion compl, - u32 abort_code, - int error) -{ - if (call->state < RXRPC_CALL_COMPLETE) { - call->abort_code = abort_code; - call->error = error; - call->completion = compl; - /* Allow reader of completion state to operate locklessly */ - smp_store_release(&call->state, RXRPC_CALL_COMPLETE); - trace_rxrpc_call_complete(call); - wake_up(&call->waitq); - rxrpc_notify_socket(call); - return true; - } - return false; -} - -bool rxrpc_set_call_completion(struct rxrpc_call *call, - enum rxrpc_call_completion compl, - u32 abort_code, - int error) -{ - bool ret = false; - - if (call->state < RXRPC_CALL_COMPLETE) { - spin_lock(&call->state_lock); - ret = __rxrpc_set_call_completion(call, compl, abort_code, error); - spin_unlock(&call->state_lock); - } - return ret; -} - -/* - * Record that a call successfully completed. - */ -bool __rxrpc_call_completed(struct rxrpc_call *call) -{ - return __rxrpc_set_call_completion(call, RXRPC_CALL_SUCCEEDED, 0, 0); -} - -bool rxrpc_call_completed(struct rxrpc_call *call) -{ - bool ret = false; - - if (call->state < RXRPC_CALL_COMPLETE) { - spin_lock(&call->state_lock); - ret = __rxrpc_call_completed(call); - spin_unlock(&call->state_lock); - } - return ret; -} - -/* - * Record that a call is locally aborted. - */ -bool __rxrpc_abort_call(struct rxrpc_call *call, rxrpc_seq_t seq, - u32 abort_code, int error, - enum rxrpc_abort_reason why) -{ - trace_rxrpc_abort(call->debug_id, why, call->cid, call->call_id, seq, - abort_code, error); - return __rxrpc_set_call_completion(call, RXRPC_CALL_LOCALLY_ABORTED, - abort_code, error); -} - -bool rxrpc_abort_call(struct rxrpc_call *call, rxrpc_seq_t seq, - u32 abort_code, int error, - enum rxrpc_abort_reason why) -{ - bool ret; - - spin_lock(&call->state_lock); - ret = __rxrpc_abort_call(call, seq, abort_code, error, why); - spin_unlock(&call->state_lock); - if (ret) - rxrpc_send_abort_packet(call); - return ret; -} - /* * Pass a call terminating message to userspace. 
*/
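The move to call_state.c does not change the calling convention of these helpers: the double-underscored variants still expect ->state_lock to be held by the caller, while the plain variants take it themselves. A minimal illustrative caller (the error value here is arbitrary, purely for the sake of the example):

static void example_fail_call_locked(struct rxrpc_call *call, int error)
{
	spin_lock(&call->state_lock);
	/* Completion also publishes ->error and ->abort_code, wakes the
	 * call and notifies the socket, exactly as before the move.
	 */
	__rxrpc_set_call_completion(call, RXRPC_CALL_LOCAL_ERROR, 0, error);
	spin_unlock(&call->state_lock);
}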
From patchwork Tue Dec 6 16:01:50 2022
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 30416
Organization: Red Hat UK Ltd. Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SI4 1TE, United Kingdom. Registered in England and Wales under Company Registration No.
3798903 Subject: [PATCH net-next 23/32] rxrpc: Wrap accesses to get call state to put the barrier in one place From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Tue, 06 Dec 2022 16:01:50 +0000 Message-ID: <167034251055.1105287.13232981553615424613.stgit@warthog.procyon.org.uk> In-Reply-To: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> References: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.5 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1751481507439289807?= X-GMAIL-MSGID: =?utf-8?q?1751481507439289807?= Wrap accesses to get the state of a call from outside of the I/O thread in a single place so that the barrier needed to order wrt the error code and abort code is in just that place. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- net/rxrpc/af_rxrpc.c | 2 +- net/rxrpc/ar-internal.h | 16 ++++++++++++++++ net/rxrpc/recvmsg.c | 12 ++++++------ net/rxrpc/sendmsg.c | 22 +++++++++++----------- 4 files changed, 34 insertions(+), 18 deletions(-) diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c index 7446b7bd5490..74304f81ade7 100644 --- a/net/rxrpc/af_rxrpc.c +++ b/net/rxrpc/af_rxrpc.c @@ -379,7 +379,7 @@ EXPORT_SYMBOL(rxrpc_kernel_end_call); bool rxrpc_kernel_check_life(const struct socket *sock, const struct rxrpc_call *call) { - return call->state != RXRPC_CALL_COMPLETE; + return !rxrpc_call_is_complete(call); } EXPORT_SYMBOL(rxrpc_kernel_check_life); diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index a95e161bd980..6a5552274dca 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -896,6 +896,22 @@ bool rxrpc_call_completed(struct rxrpc_call *); bool __rxrpc_abort_call(struct rxrpc_call *, rxrpc_seq_t, u32, int, enum rxrpc_abort_reason); bool rxrpc_abort_call(struct rxrpc_call *, rxrpc_seq_t, u32, int, enum rxrpc_abort_reason); +static inline enum rxrpc_call_state rxrpc_call_state(const struct rxrpc_call *call) +{ + /* Order read ->state before read ->error. 
*/ + return smp_load_acquire(&call->state); +} + +static inline bool rxrpc_call_is_complete(const struct rxrpc_call *call) +{ + return rxrpc_call_state(call) == RXRPC_CALL_COMPLETE; +} + +static inline bool rxrpc_call_has_failed(const struct rxrpc_call *call) +{ + return rxrpc_call_is_complete(call) && call->completion != RXRPC_CALL_SUCCEEDED; +} + /* * conn_client.c */ diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c index c163c103aa1d..44576d16cdb0 100644 --- a/net/rxrpc/recvmsg.c +++ b/net/rxrpc/recvmsg.c @@ -89,7 +89,7 @@ static int rxrpc_recvmsg_term(struct rxrpc_call *call, struct msghdr *msg) ret = put_cmsg(msg, SOL_RXRPC, RXRPC_LOCAL_ERROR, 4, &tmp); break; default: - pr_err("Invalid terminal call state %u\n", call->state); + pr_err("Invalid terminal call state %u\n", call->completion); BUG(); break; } @@ -111,7 +111,7 @@ static void rxrpc_end_rx_phase(struct rxrpc_call *call, rxrpc_serial_t serial) trace_rxrpc_receive(call, rxrpc_receive_end, 0, whigh); - if (call->state == RXRPC_CALL_CLIENT_RECV_REPLY) + if (rxrpc_call_state(call) == RXRPC_CALL_CLIENT_RECV_REPLY) rxrpc_propose_delay_ACK(call, serial, rxrpc_propose_ack_terminal_ack); spin_lock(&call->state_lock); @@ -210,7 +210,7 @@ static int rxrpc_recvmsg_data(struct socket *sock, struct rxrpc_call *call, rx_pkt_offset = call->rx_pkt_offset; rx_pkt_len = call->rx_pkt_len; - if (call->state >= RXRPC_CALL_SERVER_ACK_REQUEST) { + if (rxrpc_call_state(call) >= RXRPC_CALL_SERVER_ACK_REQUEST) { seq = call->ackr_window - 1; ret = 1; goto done; @@ -414,7 +414,7 @@ int rxrpc_recvmsg(struct socket *sock, struct msghdr *msg, size_t len, msg->msg_namelen = len; } - switch (READ_ONCE(call->state)) { + switch (rxrpc_call_state(call)) { case RXRPC_CALL_CLIENT_RECV_REPLY: case RXRPC_CALL_SERVER_RECV_REQUEST: case RXRPC_CALL_SERVER_ACK_REQUEST: @@ -434,7 +434,7 @@ int rxrpc_recvmsg(struct socket *sock, struct msghdr *msg, size_t len, if (ret < 0) goto error_unlock_call; - if (call->state == RXRPC_CALL_COMPLETE) { + if (rxrpc_call_is_complete(call)) { ret = rxrpc_recvmsg_term(call, msg); if (ret < 0) goto error_unlock_call; @@ -514,7 +514,7 @@ int rxrpc_kernel_recv_data(struct socket *sock, struct rxrpc_call *call, mutex_lock(&call->user_mutex); - switch (READ_ONCE(call->state)) { + switch (rxrpc_call_state(call)) { case RXRPC_CALL_CLIENT_RECV_REPLY: case RXRPC_CALL_SERVER_RECV_REQUEST: case RXRPC_CALL_SERVER_ACK_REQUEST: diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c index 51c676bf03a8..1f047ec27316 100644 --- a/net/rxrpc/sendmsg.c +++ b/net/rxrpc/sendmsg.c @@ -25,7 +25,7 @@ bool rxrpc_propose_abort(struct rxrpc_call *call, s32 abort_code, int error, { _enter("{%d},%d,%d,%u", call->debug_id, abort_code, error, why); - if (!call->send_abort && call->state < RXRPC_CALL_COMPLETE) { + if (!call->send_abort && !rxrpc_call_is_complete(call)) { call->send_abort_why = why; call->send_abort_err = error; call->send_abort_seq = 0; @@ -59,7 +59,7 @@ static int rxrpc_wait_for_tx_window_intr(struct rxrpc_sock *rx, if (rxrpc_check_tx_space(call, NULL)) return 0; - if (call->state >= RXRPC_CALL_COMPLETE) + if (rxrpc_call_is_complete(call)) return call->error; if (signal_pending(current)) @@ -94,7 +94,7 @@ static int rxrpc_wait_for_tx_window_waitall(struct rxrpc_sock *rx, if (rxrpc_check_tx_space(call, &tx_win)) return 0; - if (call->state >= RXRPC_CALL_COMPLETE) + if (rxrpc_call_is_complete(call)) return call->error; if (timeout == 0 && @@ -123,7 +123,7 @@ static int rxrpc_wait_for_tx_window_nonintr(struct rxrpc_sock *rx, if 
(rxrpc_check_tx_space(call, NULL)) return 0; - if (call->state >= RXRPC_CALL_COMPLETE) + if (rxrpc_call_is_complete(call)) return call->error; trace_rxrpc_txqueue(call, rxrpc_txqueue_wait); @@ -272,7 +272,7 @@ static int rxrpc_send_data(struct rxrpc_sock *rx, ret = -EPIPE; if (sk->sk_shutdown & SEND_SHUTDOWN) goto maybe_error; - state = READ_ONCE(call->state); + state = rxrpc_call_state(call); ret = -ESHUTDOWN; if (state >= RXRPC_CALL_COMPLETE) goto maybe_error; @@ -349,7 +349,7 @@ static int rxrpc_send_data(struct rxrpc_sock *rx, /* check for the far side aborting the call or a network error * occurring */ - if (call->state == RXRPC_CALL_COMPLETE) + if (rxrpc_call_is_complete(call)) goto call_terminated; /* add the packet to the send queue if it's now full */ @@ -374,7 +374,7 @@ static int rxrpc_send_data(struct rxrpc_sock *rx, success: ret = copied; - if (smp_load_acquire(&call->state) == RXRPC_CALL_COMPLETE && + if (rxrpc_call_is_complete(call) && call->error < 0) ret = call->error; out: @@ -614,10 +614,10 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len) return PTR_ERR(call); /* ... and we have the call lock. */ ret = 0; - if (READ_ONCE(call->state) == RXRPC_CALL_COMPLETE) + if (rxrpc_call_is_complete(call)) goto out_put_unlock; } else { - switch (READ_ONCE(call->state)) { + switch (rxrpc_call_state(call)) { case RXRPC_CALL_UNINITIALISED: case RXRPC_CALL_CLIENT_AWAIT_CONN: case RXRPC_CALL_SERVER_PREALLOC: @@ -671,7 +671,7 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len) break; } - state = READ_ONCE(call->state); + state = rxrpc_call_state(call); _debug("CALL %d USR %lx ST %d on CONN %p", call->debug_id, call->user_call_ID, state, call->conn); @@ -731,7 +731,7 @@ int rxrpc_kernel_send_data(struct socket *sock, struct rxrpc_call *call, _debug("CALL %d USR %lx ST %d on CONN %p", call->debug_id, call->user_call_ID, call->state, call->conn); - switch (smp_load_acquire(&call->state)) { + switch (rxrpc_call_state(call)) { case RXRPC_CALL_CLIENT_SEND_REQUEST: case RXRPC_CALL_SERVER_ACK_REQUEST: case RXRPC_CALL_SERVER_SEND_REPLY:
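The point of routing every read made outside the I/O thread through rxrpc_call_state() is that its acquire load pairs with the smp_store_release() in __rxrpc_set_call_completion(): a reader that observes RXRPC_CALL_COMPLETE is guaranteed to also observe the ->error and ->abort_code written just before it. A self-contained userspace C11 sketch of the same publication pattern (illustrative only, not kernel code):

#include <stdatomic.h>
#include <stdio.h>

enum call_state { CALL_IN_PROGRESS, CALL_COMPLETE };

struct call {
	int error;		/* plain field, published by the state change */
	_Atomic int state;
};

static void set_call_completion(struct call *call, int error)
{
	call->error = error;
	/* Release: orders the ->error store before the state store. */
	atomic_store_explicit(&call->state, CALL_COMPLETE, memory_order_release);
}

static int call_state(struct call *call)
{
	/* Acquire: pairs with the release above, so seeing CALL_COMPLETE
	 * implies also seeing the ->error written before it.
	 */
	return atomic_load_explicit(&call->state, memory_order_acquire);
}

int main(void)
{
	struct call call = { .error = 0, .state = CALL_IN_PROGRESS };

	set_call_completion(&call, -104);
	if (call_state(&call) == CALL_COMPLETE)
		printf("error=%d\n", call.error);
	return 0;
}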
3798903 Subject: [PATCH net-next 24/32] rxrpc: Move call state changes from sendmsg to I/O thread From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Tue, 06 Dec 2022 16:01:59 +0000 Message-ID: <167034251913.1105287.1566401510012063911.stgit@warthog.procyon.org.uk> In-Reply-To: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> References: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.7 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1751481502625342588?= X-GMAIL-MSGID: =?utf-8?q?1751481502625342588?= Move all the call state changes that are made in rxrpc_sendmsg() to the I/O thread. This is a step towards removing the call state lock. This requires the switch to the RXRPC_CALL_CLIENT_AWAIT_REPLY and RXRPC_CALL_SERVER_SEND_REPLY states to be done when the last packet is decanted from ->tx_sendmsg to ->tx_buffer in the I/O thread, not when it is added to ->tx_sendmsg by sendmsg(). Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- Documentation/networking/rxrpc.rst | 4 +- net/rxrpc/call_event.c | 50 ++++++++++++++++++++++++++- net/rxrpc/sendmsg.c | 67 +++++++----------------------------- 3 files changed, 63 insertions(+), 58 deletions(-) diff --git a/Documentation/networking/rxrpc.rst b/Documentation/networking/rxrpc.rst index 39494a6ea739..e1af54424192 100644 --- a/Documentation/networking/rxrpc.rst +++ b/Documentation/networking/rxrpc.rst @@ -880,8 +880,8 @@ The kernel interface functions are as follows: notify_end_rx can be NULL or it can be used to specify a function to be called when the call changes state to end the Tx phase. This function is - called with the call-state spinlock held to prevent any reply or final ACK - from being delivered first. + called with a spinlock held to prevent the last DATA packet from being + transmitted until the function returns. (#) Receive data from a call:: diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c index b41ffb0a69e9..67cc46b83e37 100644 --- a/net/rxrpc/call_event.c +++ b/net/rxrpc/call_event.c @@ -251,6 +251,50 @@ void rxrpc_resend(struct rxrpc_call *call, struct sk_buff *ack_skb) _leave(""); } +/* + * Start transmitting the reply to a service. This cancels the need to ACK the + * request if we haven't yet done so. + */ +static void rxrpc_begin_service_reply(struct rxrpc_call *call) +{ + unsigned long now; + + spin_lock(&call->state_lock); + + if (call->state == RXRPC_CALL_SERVER_ACK_REQUEST) { + now = jiffies; + call->state = RXRPC_CALL_SERVER_SEND_REPLY; + WRITE_ONCE(call->delay_ack_at, now + MAX_JIFFY_OFFSET); + if (call->ackr_reason == RXRPC_ACK_DELAY) + call->ackr_reason = 0; + trace_rxrpc_timer(call, rxrpc_timer_init_for_send_reply, now); + } + + spin_unlock(&call->state_lock); +} + +/* + * Close the transmission phase. After this point there is no more data to be + * transmitted in the call. 
+ */ +static void rxrpc_close_tx_phase(struct rxrpc_call *call) +{ + _debug("________awaiting reply/ACK__________"); + + spin_lock(&call->state_lock); + switch (call->state) { + case RXRPC_CALL_CLIENT_SEND_REQUEST: + call->state = RXRPC_CALL_CLIENT_AWAIT_REPLY; + break; + case RXRPC_CALL_SERVER_SEND_REPLY: + call->state = RXRPC_CALL_SERVER_AWAIT_ACK; + break; + default: + break; + } + spin_unlock(&call->state_lock); +} + static bool rxrpc_tx_window_has_space(struct rxrpc_call *call) { unsigned int winsize = min_t(unsigned int, call->tx_winsize, @@ -283,6 +327,9 @@ static void rxrpc_decant_prepared_tx(struct rxrpc_call *call) call->tx_top = txb->seq; list_add_tail(&txb->call_link, &call->tx_buffer); + if (txb->wire.flags & RXRPC_LAST_PACKET) + rxrpc_close_tx_phase(call); + rxrpc_transmit_one(call, txb); if (!rxrpc_tx_window_has_space(call)) @@ -296,12 +343,11 @@ static void rxrpc_transmit_some_data(struct rxrpc_call *call) case RXRPC_CALL_SERVER_ACK_REQUEST: if (list_empty(&call->tx_sendmsg)) return; + rxrpc_begin_service_reply(call); fallthrough; case RXRPC_CALL_SERVER_SEND_REPLY: - case RXRPC_CALL_SERVER_AWAIT_ACK: case RXRPC_CALL_CLIENT_SEND_REQUEST: - case RXRPC_CALL_CLIENT_AWAIT_REPLY: if (!rxrpc_tx_window_has_space(call)) return; if (list_empty(&call->tx_sendmsg)) { diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c index 1f047ec27316..5b17ee1cbfbf 100644 --- a/net/rxrpc/sendmsg.c +++ b/net/rxrpc/sendmsg.c @@ -188,7 +188,6 @@ static void rxrpc_queue_packet(struct rxrpc_sock *rx, struct rxrpc_call *call, struct rxrpc_txbuf *txb, rxrpc_notify_end_tx_t notify_end_tx) { - unsigned long now; rxrpc_seq_t seq = txb->seq; bool last = test_bit(RXRPC_TXBUF_LAST, &txb->flags), poke; @@ -211,36 +210,10 @@ static void rxrpc_queue_packet(struct rxrpc_sock *rx, struct rxrpc_call *call, poke = list_empty(&call->tx_sendmsg); list_add_tail(&txb->call_link, &call->tx_sendmsg); call->tx_prepared = seq; + if (last) + rxrpc_notify_end_tx(rx, call, notify_end_tx); spin_unlock(&call->tx_lock); - if (last || call->state == RXRPC_CALL_SERVER_ACK_REQUEST) { - _debug("________awaiting reply/ACK__________"); - spin_lock(&call->state_lock); - switch (call->state) { - case RXRPC_CALL_CLIENT_SEND_REQUEST: - call->state = RXRPC_CALL_CLIENT_AWAIT_REPLY; - rxrpc_notify_end_tx(rx, call, notify_end_tx); - break; - case RXRPC_CALL_SERVER_ACK_REQUEST: - call->state = RXRPC_CALL_SERVER_SEND_REPLY; - now = jiffies; - WRITE_ONCE(call->delay_ack_at, now + MAX_JIFFY_OFFSET); - if (call->ackr_reason == RXRPC_ACK_DELAY) - call->ackr_reason = 0; - trace_rxrpc_timer(call, rxrpc_timer_init_for_send_reply, now); - if (!last) - break; - fallthrough; - case RXRPC_CALL_SERVER_SEND_REPLY: - call->state = RXRPC_CALL_SERVER_AWAIT_ACK; - rxrpc_notify_end_tx(rx, call, notify_end_tx); - break; - default: - break; - } - spin_unlock(&call->state_lock); - } - if (poke) rxrpc_poke_call(call, rxrpc_call_poke_start); } @@ -279,8 +252,13 @@ static int rxrpc_send_data(struct rxrpc_sock *rx, ret = -EPROTO; if (state != RXRPC_CALL_CLIENT_SEND_REQUEST && state != RXRPC_CALL_SERVER_ACK_REQUEST && - state != RXRPC_CALL_SERVER_SEND_REPLY) + state != RXRPC_CALL_SERVER_SEND_REPLY) { + /* Request phase complete for this client call */ + trace_rxrpc_abort(call->debug_id, rxrpc_sendmsg_late_send, + call->cid, call->call_id, call->rx_consumed, + 0, -EPROTO); goto maybe_error; + } ret = -EMSGSIZE; if (call->tx_total_len != -1) { @@ -572,7 +550,6 @@ rxrpc_new_client_call_for_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, int rxrpc_do_sendmsg(struct 
rxrpc_sock *rx, struct msghdr *msg, size_t len) __releases(&rx->sk.sk_lock.slock) { - enum rxrpc_call_state state; struct rxrpc_call *call; unsigned long now, j; bool dropped_lock = false; @@ -671,11 +648,7 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len) break; } - state = rxrpc_call_state(call); - _debug("CALL %d USR %lx ST %d on CONN %p", - call->debug_id, call->user_call_ID, state, call->conn); - - if (state >= RXRPC_CALL_COMPLETE) { + if (rxrpc_call_is_complete(call)) { /* it's too late for this call */ ret = -ESHUTDOWN; } else if (p.command == RXRPC_CMD_SEND_ABORT) { @@ -721,7 +694,7 @@ int rxrpc_kernel_send_data(struct socket *sock, struct rxrpc_call *call, bool dropped_lock = false; int ret; - _enter("{%d,%s},", call->debug_id, rxrpc_call_states[call->state]); + _enter("{%d},", call->debug_id); ASSERTCMP(msg->msg_name, ==, NULL); ASSERTCMP(msg->msg_control, ==, NULL); @@ -731,24 +704,10 @@ int rxrpc_kernel_send_data(struct socket *sock, struct rxrpc_call *call, _debug("CALL %d USR %lx ST %d on CONN %p", call->debug_id, call->user_call_ID, call->state, call->conn); - switch (rxrpc_call_state(call)) { - case RXRPC_CALL_CLIENT_SEND_REQUEST: - case RXRPC_CALL_SERVER_ACK_REQUEST: - case RXRPC_CALL_SERVER_SEND_REPLY: - ret = rxrpc_send_data(rxrpc_sk(sock->sk), call, msg, len, - notify_end_tx, &dropped_lock); - break; - case RXRPC_CALL_COMPLETE: + ret = rxrpc_send_data(rxrpc_sk(sock->sk), call, msg, len, + notify_end_tx, &dropped_lock); + if (ret == -ESHUTDOWN) ret = call->error; - break; - default: - /* Request phase complete for this client call */ - trace_rxrpc_abort(call->debug_id, rxrpc_sendmsg_late_send, - call->cid, call->call_id, call->rx_consumed, - 0, -EPROTO); - ret = -EPROTO; - break; - } if (!dropped_lock) mutex_unlock(&call->user_mutex); From patchwork Tue Dec 6 16:02:07 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 30419 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp2908901wrr; Tue, 6 Dec 2022 08:09:08 -0800 (PST) X-Google-Smtp-Source: AA0mqf6fFMdrT2nSX+8cw/Mdr6G67rXOw7tJR5+nUSZ6VLIKnNYuA5gHqqjE4vfsKA219TiYizBg X-Received: by 2002:a17:902:e3ca:b0:189:a11e:9994 with SMTP id r10-20020a170902e3ca00b00189a11e9994mr34743920ple.54.1670342948081; Tue, 06 Dec 2022 08:09:08 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1670342948; cv=none; d=google.com; s=arc-20160816; b=Glo4CCXKxsxVQ/7075wblQSA/AwAwGGeKsRm0njT3d+G91iDxnobmzy2PXZSe1ttwZ RTTeQQvRxvyq21ydVkT8W3m4jhOnu3NXtPgAOfJyHZXKUbC0dE+FshtfDeFzd6DnHXkH hrOJqhBTaqjV+twFY/FQYngkm/MoiOnBY+iQuRv5JLQ5t25KF5sF7uLyUEgKYcOqxI5L QVxmxhZoEuKXDvn0CX8fa5Ck80ghZMxcWblbmAqcHRN9qsL28zei5NC0mthVlyPgdoZ/ mi8x3KRbg0OAr46d5VVxqQu5WUV/EFsPE96kU1vD67HEkej3gPue2tB0vi29ZrEcJvuh bOPw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:cc:to:from :subject:organization:dkim-signature; bh=QJxJ6nTOVr7Gy43WamAHi0hFK2FQROOHVi+eeAx3fds=; b=HMf1bY6ELhqM+VvSaTJPBLdC7DqwpYFikgqry/USvR6PTMybcU8xZZYrmJ5mHTd0wj LF7he7VxwcsovkXKotTJXaRTqFPmWb6ti7C2DkeEvPZeg0tZYVa8JF8nw0mJnHxbctFf +Te5TwF7uSc7m73JC8rpSqh5MhdXyu8DD04zE9H0Tl58qcnRThLZbqw7OcwYTETiWHF3 YQSQHqdkng6x7o3VHZhC+wqtmv55oI0HlYJNgwTeK1R1hpKMLRhZhlUz1keMmKholNfi 3bQvdX9GbyeeeLcT7PK8RPbja2C5IEnt/PAdpuXRu4cW1JCuDcYHuqOkSXpPjGFGFtmn relQ== ARC-Authentication-Results: i=1; 
3798903 Subject: [PATCH net-next 25/32] rxrpc: Move call state changes from recvmsg to I/O thread From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Tue, 06 Dec 2022 16:02:07 +0000 Message-ID: <167034252760.1105287.5705932376803769848.stgit@warthog.procyon.org.uk> In-Reply-To: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> References: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.9 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1751481527086097534?= X-GMAIL-MSGID: =?utf-8?q?1751481527086097534?= Move the call state changes that are made in rxrpc_recvmsg() to the I/O thread. This means that, thenceforth, only the I/O thread does this and the call state lock can be removed. This requires the Rx phase to be ended when the last packet is received, not when it is processed. Since this now changes the rxrpc call state to SUCCEEDED before we've consumed all the data from it, rxrpc_kernel_check_life() mustn't say the call is dead until the recvmsg queue is empty (unless the call has failed). Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- fs/afs/rxrpc.c | 1 net/rxrpc/af_rxrpc.c | 10 ++- net/rxrpc/ar-internal.h | 3 + net/rxrpc/input.c | 38 ++++++++++- net/rxrpc/recvmsg.c | 168 +++++++++++++++++------------------------------ 5 files changed, 109 insertions(+), 111 deletions(-) diff --git a/fs/afs/rxrpc.c b/fs/afs/rxrpc.c index cfff7f9ccc33..1224cc4566d0 100644 --- a/fs/afs/rxrpc.c +++ b/fs/afs/rxrpc.c @@ -909,6 +909,7 @@ int afs_extract_data(struct afs_call *call, bool want_more) ret = rxrpc_kernel_recv_data(net->socket, call->rxcall, iter, &call->iov_len, want_more, &remote_abort, &call->service_id); + trace_afs_receive_data(call, call->iter, want_more, ret); if (ret == 0 || ret == -EAGAIN) return ret; diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c index 74304f81ade7..6f790d48e51e 100644 --- a/net/rxrpc/af_rxrpc.c +++ b/net/rxrpc/af_rxrpc.c @@ -373,13 +373,17 @@ EXPORT_SYMBOL(rxrpc_kernel_end_call); * @sock: The socket the call is on * @call: The call to check * - * Allow a kernel service to find out whether a call is still alive - - * ie. whether it has completed. + * Allow a kernel service to find out whether a call is still alive - whether + * it has completed successfully and all received data has been consumed. 
*/ bool rxrpc_kernel_check_life(const struct socket *sock, const struct rxrpc_call *call) { - return !rxrpc_call_is_complete(call); + if (!rxrpc_call_is_complete(call)) + return true; + if (call->completion != RXRPC_CALL_SUCCEEDED) + return false; + return !skb_queue_empty(&call->recvmsg_queue); } EXPORT_SYMBOL(rxrpc_kernel_check_life); diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 6a5552274dca..beb5fbee2efd 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -546,7 +546,8 @@ enum rxrpc_call_flag { RXRPC_CALL_KERNEL, /* The call was made by the kernel */ RXRPC_CALL_UPGRADE, /* Service upgrade was requested for the call */ RXRPC_CALL_EXCLUSIVE, /* The call uses a once-only connection */ - RXRPC_CALL_RX_IS_IDLE, /* Reception is idle - send an ACK */ + RXRPC_CALL_RX_IS_IDLE, /* recvmsg() is idle - send an ACK */ + RXRPC_CALL_RECVMSG_READ_ALL, /* recvmsg() read all of the received data */ }; /* diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index dce440600f1a..b0ac846b16fc 100644 --- a/net/rxrpc/input.c +++ b/net/rxrpc/input.c @@ -319,6 +319,41 @@ static bool rxrpc_receiving_reply(struct rxrpc_call *call) return true; } +/* + * End the packet reception phase. + */ +static void rxrpc_end_rx_phase(struct rxrpc_call *call, rxrpc_serial_t serial) +{ + rxrpc_seq_t whigh = READ_ONCE(call->rx_highest_seq); + + _enter("%d,%s", call->debug_id, rxrpc_call_states[call->state]); + + trace_rxrpc_receive(call, rxrpc_receive_end, 0, whigh); + + if (rxrpc_call_state(call) == RXRPC_CALL_CLIENT_RECV_REPLY) + rxrpc_propose_delay_ACK(call, serial, rxrpc_propose_ack_terminal_ack); + + spin_lock(&call->state_lock); + + switch (call->state) { + case RXRPC_CALL_CLIENT_RECV_REPLY: + __rxrpc_call_completed(call); + spin_unlock(&call->state_lock); + break; + + case RXRPC_CALL_SERVER_RECV_REQUEST: + call->state = RXRPC_CALL_SERVER_ACK_REQUEST; + call->expect_req_by = jiffies + MAX_JIFFY_OFFSET; + spin_unlock(&call->state_lock); + rxrpc_propose_delay_ACK(call, serial, + rxrpc_propose_ack_processing_op); + break; + default: + spin_unlock(&call->state_lock); + break; + } +} + static void rxrpc_input_update_ack_window(struct rxrpc_call *call, rxrpc_seq_t window, rxrpc_seq_t wtop) { @@ -338,8 +373,9 @@ static void rxrpc_input_queue_data(struct rxrpc_call *call, struct sk_buff *skb, __skb_queue_tail(&call->recvmsg_queue, skb); rxrpc_input_update_ack_window(call, window, wtop); - trace_rxrpc_receive(call, last ? why + 1 : why, sp->hdr.serial, sp->hdr.seq); + if (last) + rxrpc_end_rx_phase(call, sp->hdr.serial); } /* diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c index 44576d16cdb0..1465ad3e691b 100644 --- a/net/rxrpc/recvmsg.c +++ b/net/rxrpc/recvmsg.c @@ -100,42 +100,6 @@ static int rxrpc_recvmsg_term(struct rxrpc_call *call, struct msghdr *msg) return ret; } -/* - * End the packet reception phase. 
- */ -static void rxrpc_end_rx_phase(struct rxrpc_call *call, rxrpc_serial_t serial) -{ - rxrpc_seq_t whigh = READ_ONCE(call->rx_highest_seq); - - _enter("%d,%s", call->debug_id, rxrpc_call_states[call->state]); - - trace_rxrpc_receive(call, rxrpc_receive_end, 0, whigh); - - if (rxrpc_call_state(call) == RXRPC_CALL_CLIENT_RECV_REPLY) - rxrpc_propose_delay_ACK(call, serial, rxrpc_propose_ack_terminal_ack); - - spin_lock(&call->state_lock); - - switch (call->state) { - case RXRPC_CALL_CLIENT_RECV_REPLY: - __rxrpc_call_completed(call); - spin_unlock(&call->state_lock); - rxrpc_poke_call(call, rxrpc_call_poke_complete); - break; - - case RXRPC_CALL_SERVER_RECV_REQUEST: - call->state = RXRPC_CALL_SERVER_ACK_REQUEST; - call->expect_req_by = jiffies + MAX_JIFFY_OFFSET; - spin_unlock(&call->state_lock); - rxrpc_propose_delay_ACK(call, serial, - rxrpc_propose_ack_processing_op); - break; - default: - spin_unlock(&call->state_lock); - break; - } -} - /* * Discard a packet we've used up and advance the Rx window by one. */ @@ -166,10 +130,9 @@ static void rxrpc_rotate_rx_window(struct rxrpc_call *call) trace_rxrpc_receive(call, last ? rxrpc_receive_rotate_last : rxrpc_receive_rotate, serial, call->rx_consumed); - if (last) { - rxrpc_end_rx_phase(call, serial); - return; - } + + if (last) + set_bit(RXRPC_CALL_RECVMSG_READ_ALL, &call->flags); /* Check to see if there's an ACK that needs sending. */ acked = atomic_add_return(call->rx_consumed - old_consumed, @@ -194,7 +157,8 @@ static int rxrpc_verify_data(struct rxrpc_call *call, struct sk_buff *skb) /* * Deliver messages to a call. This keeps processing packets until the buffer * is filled and we find either more DATA (returns 0) or the end of the DATA - * (returns 1). If more packets are required, it returns -EAGAIN. + * (returns 1). If more packets are required, it returns -EAGAIN and if the + * call has failed it returns -EIO. 
*/ static int rxrpc_recvmsg_data(struct socket *sock, struct rxrpc_call *call, struct msghdr *msg, struct iov_iter *iter, @@ -210,7 +174,13 @@ static int rxrpc_recvmsg_data(struct socket *sock, struct rxrpc_call *call, rx_pkt_offset = call->rx_pkt_offset; rx_pkt_len = call->rx_pkt_len; - if (rxrpc_call_state(call) >= RXRPC_CALL_SERVER_ACK_REQUEST) { + if (rxrpc_call_has_failed(call)) { + seq = call->ackr_window - 1; + ret = -EIO; + goto done; + } + + if (test_bit(RXRPC_CALL_RECVMSG_READ_ALL, &call->flags)) { seq = call->ackr_window - 1; ret = 1; goto done; @@ -234,14 +204,15 @@ static int rxrpc_recvmsg_data(struct socket *sock, struct rxrpc_call *call, if (rx_pkt_offset == 0) { ret2 = rxrpc_verify_data(call, skb); - rx_pkt_offset = sp->offset; - rx_pkt_len = sp->len; trace_rxrpc_recvdata(call, rxrpc_recvmsg_next, seq, - rx_pkt_offset, rx_pkt_len, ret2); + sp->offset, sp->len, ret2); if (ret2 < 0) { + kdebug("verify = %d", ret2); ret = ret2; goto out; } + rx_pkt_offset = sp->offset; + rx_pkt_len = sp->len; } else { trace_rxrpc_recvdata(call, rxrpc_recvmsg_cont, seq, rx_pkt_offset, rx_pkt_len, 0); @@ -414,36 +385,36 @@ int rxrpc_recvmsg(struct socket *sock, struct msghdr *msg, size_t len, msg->msg_namelen = len; } - switch (rxrpc_call_state(call)) { - case RXRPC_CALL_CLIENT_RECV_REPLY: - case RXRPC_CALL_SERVER_RECV_REQUEST: - case RXRPC_CALL_SERVER_ACK_REQUEST: - ret = rxrpc_recvmsg_data(sock, call, msg, &msg->msg_iter, len, - flags, &copied); - if (ret == -EAGAIN) - ret = 0; - - if (!skb_queue_empty(&call->recvmsg_queue)) - rxrpc_notify_socket(call); - break; - default: + ret = rxrpc_recvmsg_data(sock, call, msg, &msg->msg_iter, len, + flags, &copied); + if (ret == -EAGAIN) ret = 0; - break; - } - + if (ret == -EIO) + goto call_failed; if (ret < 0) goto error_unlock_call; - if (rxrpc_call_is_complete(call)) { - ret = rxrpc_recvmsg_term(call, msg); - if (ret < 0) - goto error_unlock_call; - if (!(flags & MSG_PEEK)) - rxrpc_release_call(rx, call); - msg->msg_flags |= MSG_EOR; - ret = 1; - } + if (rxrpc_call_is_complete(call) && + skb_queue_empty(&call->recvmsg_queue)) + goto call_complete; + if (rxrpc_call_has_failed(call)) + goto call_failed; + rxrpc_notify_socket(call); + goto not_yet_complete; + +call_failed: + rxrpc_purge_queue(&call->recvmsg_queue); +call_complete: + ret = rxrpc_recvmsg_term(call, msg); + if (ret < 0) + goto error_unlock_call; + if (!(flags & MSG_PEEK)) + rxrpc_release_call(rx, call); + msg->msg_flags |= MSG_EOR; + ret = 1; + +not_yet_complete: if (ret == 0) msg->msg_flags |= MSG_MORE; else @@ -506,49 +477,34 @@ int rxrpc_kernel_recv_data(struct socket *sock, struct rxrpc_call *call, size_t offset = 0; int ret; - _enter("{%d,%s},%zu,%d", - call->debug_id, rxrpc_call_states[call->state], - *_len, want_more); - - ASSERTCMP(call->state, !=, RXRPC_CALL_SERVER_SECURING); + _enter("{%d},%zu,%d", call->debug_id, *_len, want_more); mutex_lock(&call->user_mutex); - switch (rxrpc_call_state(call)) { - case RXRPC_CALL_CLIENT_RECV_REPLY: - case RXRPC_CALL_SERVER_RECV_REQUEST: - case RXRPC_CALL_SERVER_ACK_REQUEST: - ret = rxrpc_recvmsg_data(sock, call, NULL, iter, - *_len, 0, &offset); - *_len -= offset; - if (ret < 0) - goto out; - - /* We can only reach here with a partially full buffer if we - * have reached the end of the data. We must otherwise have a - * full buffer or have been given -EAGAIN. 
- */ - if (ret == 1) { - if (iov_iter_count(iter) > 0) - goto short_data; - if (!want_more) - goto read_phase_complete; - ret = 0; - goto out; - } - - if (!want_more) - goto excess_data; + ret = rxrpc_recvmsg_data(sock, call, NULL, iter, *_len, 0, &offset); + *_len -= offset; + if (ret == -EIO) + goto call_failed; + if (ret < 0) goto out; - case RXRPC_CALL_COMPLETE: - goto call_complete; - - default: - ret = -EINPROGRESS; + /* We can only reach here with a partially full buffer if we have + * reached the end of the data. We must otherwise have a full buffer + * or have been given -EAGAIN. + */ + if (ret == 1) { + if (iov_iter_count(iter) > 0) + goto short_data; + if (!want_more) + goto read_phase_complete; + ret = 0; goto out; } + if (!want_more) + goto excess_data; + goto out; + read_phase_complete: ret = 1; out: @@ -570,7 +526,7 @@ int rxrpc_kernel_recv_data(struct socket *sock, struct rxrpc_call *call, 0, -EMSGSIZE); ret = -EMSGSIZE; goto out; -call_complete: +call_failed: *_abort = call->abort_code; ret = call->error; if (call->completion == RXRPC_CALL_SUCCEEDED) { From patchwork Tue Dec 6 16:02:16 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 30417 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp2908675wrr; Tue, 6 Dec 2022 08:08:49 -0800 (PST) X-Google-Smtp-Source: AA0mqf5RuAES+RLbz8osuBRblst78g0B9BqlcLQktsgMebJB1Soe7g2Hk2njRZ+EjoU+FApKNO9t X-Received: by 2002:a05:6a00:2348:b0:577:3125:f5aa with SMTP id j8-20020a056a00234800b005773125f5aamr5327639pfj.65.1670342929065; Tue, 06 Dec 2022 08:08:49 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1670342929; cv=none; d=google.com; s=arc-20160816; b=AGisllL56sssbEv0rH9L28juc6RJUOjnTZ6rtK0VFTRBl8OUCpA9znke24eA2G7D5w Jdy6+CEntVNrQBRpqD0pMEYHDEi6dffQCcPEaKqeHtl2kP/ek6vnwEaPpQEf5hSn84SQ oMxRJIwXed8WtVavjKRcyUSMm14vYVKRYt0EKFGjp5dCwD0RDI1zTpbQY15o/p8zCjxQ V03QXagWB7aDKvHpUVCQ+QfaJow8sJPxtmgvfm6nt8RF4D1VcZLcaaI2n0tE3UZYMGlr MFm2jurJnTw3tGClDok1UWlX/2ipnRqiZlT+5MovfVDYxKt/vyObauTHnknpuGspNVgV 8Ubw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:cc:to:from :subject:organization:dkim-signature; bh=H29b7ph8VK7g8wCvxfLievzGCSiJMpS/VTlYsmzZBGA=; b=jm21prhzw7ZddWNsQgZAMbejF6na81J9gLlsieOe9O73+TA4vxZ7o0DNcMdyCxARr3 rZUpv+7uRqq//1ct+dI1rSepPbx3eGDbodZRAQjyB3zAA06BKswwCV2tSzJZmB8XAR0M Mhy7XvB8ftq0AuVVWZtjcDAxLu5ej0qK0hMAl5TGL8c5Mcaa7kwlvjjdWnhoxMnPYJ8C XbZck9EbTgOr7ePhnhH/oIu5xrd5GRz6HHvTiX5jTadPs1qMl9qm2y7W54tXmO8WN63s Xhh6An1s2huC3LpppwNL3cGm0uQ2kW0N6Ch9hQmORHBx2+7yyhKmcEdlJzlUQDEr+vKC jFMA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=BRshCAZ6; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: from out1.vger.email (out1.vger.email. 
3798903 Subject: [PATCH net-next 26/32] rxrpc: Remove call->state_lock From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Tue, 06 Dec 2022 16:02:16 +0000 Message-ID: <167034253629.1105287.9237093066090823645.stgit@warthog.procyon.org.uk> In-Reply-To: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> References: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.9 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1751481507457594311?= X-GMAIL-MSGID: =?utf-8?q?1751481507457594311?= All the setters of call->state are now in the I/O thread and thus the state lock is now unnecessary. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- net/rxrpc/ar-internal.h | 28 +++++++++++--- net/rxrpc/call_accept.c | 2 + net/rxrpc/call_event.c | 44 +++++++++------------- net/rxrpc/call_object.c | 31 ++++++++-------- net/rxrpc/call_state.c | 89 +++++++++++++++++---------------------------- net/rxrpc/conn_client.c | 10 ++--- net/rxrpc/conn_event.c | 11 ++---- net/rxrpc/input.c | 94 ++++++++++++++++++++--------------------------- net/rxrpc/output.c | 4 +- net/rxrpc/proc.c | 6 ++- net/rxrpc/rxkad.c | 2 + net/rxrpc/sendmsg.c | 3 -- 12 files changed, 142 insertions(+), 182 deletions(-) diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index beb5fbee2efd..74f424d9001d 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -632,14 +632,13 @@ struct rxrpc_call { unsigned long flags; unsigned long events; spinlock_t notify_lock; /* Kernel notification lock */ - spinlock_t state_lock; /* lock for state transition */ unsigned int send_abort_why; /* Why the abort [enum rxrpc_abort_reason] */ s32 send_abort; /* Abort code to be sent */ short send_abort_err; /* Error to be associated with the abort */ rxrpc_seq_t send_abort_seq; /* DATA packet that incurred the abort (or 0) */ s32 abort_code; /* Local/remote abort code */ int error; /* Local error incurred */ - enum rxrpc_call_state state; /* current state of call */ + enum rxrpc_call_state _state; /* Current state of call (needs barrier) */ enum rxrpc_call_completion completion; /* Call completion condition */ refcount_t ref; u8 security_ix; /* Security type */ @@ -890,17 +889,32 @@ static inline bool rxrpc_is_client_call(const struct rxrpc_call *call) /* * call_state.c */ -bool __rxrpc_set_call_completion(struct rxrpc_call *, enum rxrpc_call_completion, u32, int); bool rxrpc_set_call_completion(struct rxrpc_call *, enum rxrpc_call_completion, u32, int); -bool __rxrpc_call_completed(struct rxrpc_call *); bool rxrpc_call_completed(struct rxrpc_call *); -bool __rxrpc_abort_call(struct rxrpc_call *, rxrpc_seq_t, u32, int, enum rxrpc_abort_reason); bool rxrpc_abort_call(struct rxrpc_call *, rxrpc_seq_t, u32, int, enum rxrpc_abort_reason); +void rxrpc_prefail_call(struct rxrpc_call *, enum rxrpc_call_completion, int); + +static inline void 
rxrpc_set_call_state(struct rxrpc_call *call, + enum rxrpc_call_state state) +{ + /* Order write of completion info before write of ->state. */ + smp_store_release(&call->_state, state); +} + +static inline enum rxrpc_call_state __rxrpc_call_state(const struct rxrpc_call *call) +{ + return call->_state; /* Only inside I/O thread */ +} + +static inline bool __rxrpc_call_is_complete(const struct rxrpc_call *call) +{ + return __rxrpc_call_state(call) == RXRPC_CALL_COMPLETE; +} static inline enum rxrpc_call_state rxrpc_call_state(const struct rxrpc_call *call) { - /* Order read ->state before read ->error. */ - return smp_load_acquire(&call->state); + /* Order read ->state before read of completion info. */ + return smp_load_acquire(&call->_state); } static inline bool rxrpc_call_is_complete(const struct rxrpc_call *call) diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c index a132d486dea0..3fbf2fcaaf9e 100644 --- a/net/rxrpc/call_accept.c +++ b/net/rxrpc/call_accept.c @@ -99,7 +99,7 @@ static int rxrpc_service_prealloc_one(struct rxrpc_sock *rx, if (!call) return -ENOMEM; call->flags |= (1 << RXRPC_CALL_IS_SERVICE); - call->state = RXRPC_CALL_SERVER_PREALLOC; + rxrpc_set_call_state(call, RXRPC_CALL_SERVER_PREALLOC); __set_bit(RXRPC_CALL_EV_INITIAL_PING, &call->events); trace_rxrpc_call(call->debug_id, refcount_read(&call->ref), diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c index 67cc46b83e37..ca69c37de85f 100644 --- a/net/rxrpc/call_event.c +++ b/net/rxrpc/call_event.c @@ -257,20 +257,13 @@ void rxrpc_resend(struct rxrpc_call *call, struct sk_buff *ack_skb) */ static void rxrpc_begin_service_reply(struct rxrpc_call *call) { - unsigned long now; - - spin_lock(&call->state_lock); - - if (call->state == RXRPC_CALL_SERVER_ACK_REQUEST) { - now = jiffies; - call->state = RXRPC_CALL_SERVER_SEND_REPLY; - WRITE_ONCE(call->delay_ack_at, now + MAX_JIFFY_OFFSET); - if (call->ackr_reason == RXRPC_ACK_DELAY) - call->ackr_reason = 0; - trace_rxrpc_timer(call, rxrpc_timer_init_for_send_reply, now); - } + unsigned long now = jiffies; - spin_unlock(&call->state_lock); + rxrpc_set_call_state(call, RXRPC_CALL_SERVER_SEND_REPLY); + WRITE_ONCE(call->delay_ack_at, now + MAX_JIFFY_OFFSET); + if (call->ackr_reason == RXRPC_ACK_DELAY) + call->ackr_reason = 0; + trace_rxrpc_timer(call, rxrpc_timer_init_for_send_reply, now); } /* @@ -281,18 +274,16 @@ static void rxrpc_close_tx_phase(struct rxrpc_call *call) { _debug("________awaiting reply/ACK__________"); - spin_lock(&call->state_lock); - switch (call->state) { + switch (__rxrpc_call_state(call)) { case RXRPC_CALL_CLIENT_SEND_REQUEST: - call->state = RXRPC_CALL_CLIENT_AWAIT_REPLY; + rxrpc_set_call_state(call, RXRPC_CALL_CLIENT_AWAIT_REPLY); break; case RXRPC_CALL_SERVER_SEND_REPLY: - call->state = RXRPC_CALL_SERVER_AWAIT_ACK; + rxrpc_set_call_state(call, RXRPC_CALL_SERVER_AWAIT_ACK); break; default: break; } - spin_unlock(&call->state_lock); } static bool rxrpc_tx_window_has_space(struct rxrpc_call *call) @@ -339,7 +330,7 @@ static void rxrpc_decant_prepared_tx(struct rxrpc_call *call) static void rxrpc_transmit_some_data(struct rxrpc_call *call) { - switch (call->state) { + switch (__rxrpc_call_state(call)) { case RXRPC_CALL_SERVER_ACK_REQUEST: if (list_empty(&call->tx_sendmsg)) return; @@ -388,15 +379,16 @@ bool rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) //printk("\n--------------------\n"); _enter("{%d,%s,%lx}", - call->debug_id, rxrpc_call_states[call->state], call->events); + call->debug_id, 
rxrpc_call_states[__rxrpc_call_state(call)], + call->events); - if (call->state == RXRPC_CALL_COMPLETE) + if (__rxrpc_call_is_complete(call)) goto out; if (!call->conn) { printk("\n"); printk("\n"); - kdebug("no conn %u", call->state); + kdebug("no conn %u", __rxrpc_call_state(call)); printk("\n"); } @@ -417,7 +409,7 @@ bool rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) } t = READ_ONCE(call->expect_req_by); - if (call->state == RXRPC_CALL_SERVER_RECV_REQUEST && + if (__rxrpc_call_state(call) == RXRPC_CALL_SERVER_RECV_REQUEST && time_after_eq(now, t)) { trace_rxrpc_timer(call, rxrpc_timer_exp_idle, now); expired = true; @@ -501,7 +493,7 @@ bool rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) rxrpc_send_ACK(call, RXRPC_ACK_PING, 0, rxrpc_propose_ack_ping_for_lost_ack); - if (resend && call->state != RXRPC_CALL_CLIENT_RECV_REPLY) + if (resend && __rxrpc_call_state(call) != RXRPC_CALL_CLIENT_RECV_REPLY) rxrpc_resend(call, NULL); if (test_and_clear_bit(RXRPC_CALL_RX_IS_IDLE, &call->flags)) @@ -522,7 +514,7 @@ bool rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) } /* Make sure the timer is restarted */ - if (call->state != RXRPC_CALL_COMPLETE) { + if (!__rxrpc_call_is_complete(call)) { next = call->expect_rx_by; #define set(T) { t = READ_ONCE(T); if (time_before(t, next)) next = t; } @@ -543,7 +535,7 @@ bool rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) } out: - if (call->state == RXRPC_CALL_COMPLETE) { + if (__rxrpc_call_is_complete(call)) { del_timer_sync(&call->timer); if (!test_bit(RXRPC_CALL_DISCONNECTED, &call->flags)) rxrpc_disconnect_call(call); diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c index 6a28956deb9f..88b9c4f5b122 100644 --- a/net/rxrpc/call_object.c +++ b/net/rxrpc/call_object.c @@ -69,7 +69,7 @@ static void rxrpc_call_timer_expired(struct timer_list *t) _enter("%d", call->debug_id); - if (call->state < RXRPC_CALL_COMPLETE) { + if (!__rxrpc_call_is_complete(call)) { trace_rxrpc_timer_expired(call, jiffies); rxrpc_poke_call(call, rxrpc_call_poke_timer); } @@ -162,7 +162,6 @@ struct rxrpc_call *rxrpc_alloc_call(struct rxrpc_sock *rx, gfp_t gfp, init_waitqueue_head(&call->waitq); spin_lock_init(&call->notify_lock); spin_lock_init(&call->tx_lock); - spin_lock_init(&call->state_lock); refcount_set(&call->ref, 1); call->debug_id = debug_id; call->tx_total_len = -1; @@ -212,7 +211,6 @@ static struct rxrpc_call *rxrpc_alloc_client_call(struct rxrpc_sock *rx, now = ktime_get_real(); call->acks_latest_ts = now; call->cong_tstamp = now; - call->state = RXRPC_CALL_CLIENT_AWAIT_CONN; call->dest_srx = *srx; call->interruptibility = p->interruptibility; call->tx_total_len = p->tx_total_len; @@ -228,11 +226,13 @@ static struct rxrpc_call *rxrpc_alloc_client_call(struct rxrpc_sock *rx, ret = rxrpc_init_client_call_security(call); if (ret < 0) { - __rxrpc_set_call_completion(call, RXRPC_CALL_LOCAL_ERROR, 0, ret); + rxrpc_prefail_call(call, RXRPC_CALL_LOCAL_ERROR, ret); rxrpc_put_call(call, rxrpc_call_put_discard_error); return ERR_PTR(ret); } + rxrpc_set_call_state(call, RXRPC_CALL_CLIENT_AWAIT_CONN); + trace_rxrpc_call(call->debug_id, refcount_read(&call->ref), p->user_call_ID, rxrpc_call_new_client); @@ -385,8 +385,7 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx, error_dup_user_ID: write_unlock(&rx->call_lock); release_sock(&rx->sk); - __rxrpc_set_call_completion(call, RXRPC_CALL_LOCAL_ERROR, - RX_CALL_DEAD, -EEXIST); + rxrpc_prefail_call(call, RXRPC_CALL_LOCAL_ERROR, 
-EEXIST); trace_rxrpc_call(call->debug_id, refcount_read(&call->ref), 0, rxrpc_call_see_userid_exists); rxrpc_release_call(rx, call); @@ -404,8 +403,7 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx, trace_rxrpc_call(call->debug_id, refcount_read(&call->ref), ret, rxrpc_call_see_connect_failed); set_bit(RXRPC_CALL_DISCONNECTED, &call->flags); - __rxrpc_set_call_completion(call, RXRPC_CALL_LOCAL_ERROR, - RX_CALL_DEAD, ret); + rxrpc_prefail_call(call, RXRPC_CALL_LOCAL_ERROR, ret); _leave(" = c=%08x [err]", call->debug_id); return call; } @@ -428,23 +426,24 @@ void rxrpc_incoming_call(struct rxrpc_sock *rx, call->call_id = sp->hdr.callNumber; call->dest_srx.srx_service = sp->hdr.serviceId; call->cid = sp->hdr.cid; - call->state = RXRPC_CALL_SERVER_SECURING; call->cong_tstamp = skb->tstamp; + rxrpc_set_call_state(call, RXRPC_CALL_SERVER_SECURING); + spin_lock(&conn->state_lock); switch (conn->state) { case RXRPC_CONN_SERVICE_UNSECURED: case RXRPC_CONN_SERVICE_CHALLENGING: - call->state = RXRPC_CALL_SERVER_SECURING; + rxrpc_set_call_state(call, RXRPC_CALL_SERVER_SECURING); break; case RXRPC_CONN_SERVICE: - call->state = RXRPC_CALL_SERVER_RECV_REQUEST; + rxrpc_set_call_state(call, RXRPC_CALL_SERVER_RECV_REQUEST); break; case RXRPC_CONN_ABORTED: - __rxrpc_set_call_completion(call, conn->completion, - conn->abort_code, conn->error); + rxrpc_set_call_completion(call, conn->completion, + conn->abort_code, conn->error); break; default: BUG(); @@ -611,7 +610,7 @@ void rxrpc_put_call(struct rxrpc_call *call, enum rxrpc_call_trace why) dead = __refcount_dec_and_test(&call->ref, &r); trace_rxrpc_call(debug_id, r - 1, 0, why); if (dead) { - ASSERTCMP(call->state, ==, RXRPC_CALL_COMPLETE); + ASSERTCMP(__rxrpc_call_state(call), ==, RXRPC_CALL_COMPLETE); if (!list_empty(&call->link)) { spin_lock(&rxnet->call_lock); @@ -674,7 +673,7 @@ void rxrpc_cleanup_call(struct rxrpc_call *call) { memset(&call->sock_node, 0xcd, sizeof(call->sock_node)); - ASSERTCMP(call->state, ==, RXRPC_CALL_COMPLETE); + ASSERTCMP(__rxrpc_call_state(call), ==, RXRPC_CALL_COMPLETE); ASSERT(test_bit(RXRPC_CALL_RELEASED, &call->flags)); del_timer(&call->timer); @@ -712,7 +711,7 @@ void rxrpc_destroy_all_calls(struct rxrpc_net *rxnet) pr_err("Call %p still in use (%d,%s,%lx,%lx)!\n", call, refcount_read(&call->ref), - rxrpc_call_states[call->state], + rxrpc_call_states[__rxrpc_call_state(call)], call->flags, call->events); spin_unlock(&rxnet->call_lock); diff --git a/net/rxrpc/call_state.c b/net/rxrpc/call_state.c index 24d240773a6f..c1f131618ac4 100644 --- a/net/rxrpc/call_state.c +++ b/net/rxrpc/call_state.c @@ -10,83 +10,60 @@ /* * Transition a call to the complete state. 
*/ -bool __rxrpc_set_call_completion(struct rxrpc_call *call, +bool rxrpc_set_call_completion(struct rxrpc_call *call, enum rxrpc_call_completion compl, u32 abort_code, int error) { - if (call->state < RXRPC_CALL_COMPLETE) { - call->abort_code = abort_code; - call->error = error; - call->completion = compl; - /* Allow reader of completion state to operate locklessly */ - smp_store_release(&call->state, RXRPC_CALL_COMPLETE); - trace_rxrpc_call_complete(call); - wake_up(&call->waitq); - rxrpc_notify_socket(call); - return true; - } - return false; -} - -bool rxrpc_set_call_completion(struct rxrpc_call *call, - enum rxrpc_call_completion compl, - u32 abort_code, - int error) -{ - bool ret = false; + if (__rxrpc_call_state(call) == RXRPC_CALL_COMPLETE) + return false; - if (call->state < RXRPC_CALL_COMPLETE) { - spin_lock(&call->state_lock); - ret = __rxrpc_set_call_completion(call, compl, abort_code, error); - spin_unlock(&call->state_lock); - } - return ret; + call->abort_code = abort_code; + call->error = error; + call->completion = compl; + /* Allow reader of completion state to operate locklessly */ + rxrpc_set_call_state(call, RXRPC_CALL_COMPLETE); + trace_rxrpc_call_complete(call); + wake_up(&call->waitq); + rxrpc_notify_socket(call); + return true; } /* * Record that a call successfully completed. */ -bool __rxrpc_call_completed(struct rxrpc_call *call) -{ - return __rxrpc_set_call_completion(call, RXRPC_CALL_SUCCEEDED, 0, 0); -} - bool rxrpc_call_completed(struct rxrpc_call *call) { - bool ret = false; - - if (call->state < RXRPC_CALL_COMPLETE) { - spin_lock(&call->state_lock); - ret = __rxrpc_call_completed(call); - spin_unlock(&call->state_lock); - } - return ret; + return rxrpc_set_call_completion(call, RXRPC_CALL_SUCCEEDED, 0, 0); } /* * Record that a call is locally aborted. */ -bool __rxrpc_abort_call(struct rxrpc_call *call, rxrpc_seq_t seq, - u32 abort_code, int error, - enum rxrpc_abort_reason why) +bool rxrpc_abort_call(struct rxrpc_call *call, rxrpc_seq_t seq, + u32 abort_code, int error, + enum rxrpc_abort_reason why) { trace_rxrpc_abort(call->debug_id, why, call->cid, call->call_id, seq, abort_code, error); - return __rxrpc_set_call_completion(call, RXRPC_CALL_LOCALLY_ABORTED, - abort_code, error); + if (!rxrpc_set_call_completion(call, RXRPC_CALL_LOCALLY_ABORTED, + abort_code, error)) + return false; + rxrpc_send_abort_packet(call); + return true; } -bool rxrpc_abort_call(struct rxrpc_call *call, rxrpc_seq_t seq, - u32 abort_code, int error, - enum rxrpc_abort_reason why) +/* + * Record that a call errored out before even getting off the ground, thereby + * setting the state to allow it to be destroyed. 
+ */ +void rxrpc_prefail_call(struct rxrpc_call *call, enum rxrpc_call_completion compl, + int error) { - bool ret; - - spin_lock(&call->state_lock); - ret = __rxrpc_abort_call(call, seq, abort_code, error, why); - spin_unlock(&call->state_lock); - if (ret) - rxrpc_send_abort_packet(call); - return ret; + call->abort_code = RX_CALL_DEAD; + call->error = error; + call->completion = compl; + call->_state = RXRPC_CALL_COMPLETE; + trace_rxrpc_call_complete(call); + __set_bit(RXRPC_CALL_RELEASED, &call->flags); } diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c index 37d8d3349e81..8e3e0ffa4417 100644 --- a/net/rxrpc/conn_client.c +++ b/net/rxrpc/conn_client.c @@ -553,9 +553,7 @@ static void rxrpc_activate_one_channel(struct rxrpc_connection *conn, trace_rxrpc_connect_call(call); - spin_lock(&call->state_lock); - call->state = RXRPC_CALL_CLIENT_SEND_REQUEST; - spin_unlock(&call->state_lock); + rxrpc_set_call_state(call, RXRPC_CALL_CLIENT_SEND_REQUEST); /* Paired with the read barrier in rxrpc_connect_call(). This orders * cid and epoch in the connection wrt to call_id without the need to @@ -687,7 +685,7 @@ static int rxrpc_wait_for_channel(struct rxrpc_bundle *bundle, set_current_state(TASK_UNINTERRUPTIBLE); break; } - if (READ_ONCE(call->state) != RXRPC_CALL_CLIENT_AWAIT_CONN) + if (rxrpc_call_state(call) != RXRPC_CALL_CLIENT_AWAIT_CONN) break; if ((call->interruptibility == RXRPC_INTERRUPTIBLE || call->interruptibility == RXRPC_PREINTERRUPTIBLE) && @@ -726,7 +724,7 @@ int rxrpc_connect_call(struct rxrpc_call *call, gfp_t gfp) goto out; } - if (call->state == RXRPC_CALL_CLIENT_AWAIT_CONN) { + if (rxrpc_call_state(call) == RXRPC_CALL_CLIENT_AWAIT_CONN) { ret = rxrpc_wait_for_channel(bundle, call, gfp); if (ret < 0) goto wait_failed; @@ -745,7 +743,7 @@ int rxrpc_connect_call(struct rxrpc_call *call, gfp_t gfp) list_del_init(&call->chan_wait_link); spin_unlock(&bundle->channel_lock); - if (call->state != RXRPC_CALL_CLIENT_AWAIT_CONN) { + if (rxrpc_call_state(call) != RXRPC_CALL_CLIENT_AWAIT_CONN) { ret = 0; goto granted_channel; } diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c index 434fa7e2f515..8009d7e62ae6 100644 --- a/net/rxrpc/conn_event.c +++ b/net/rxrpc/conn_event.c @@ -230,14 +230,9 @@ static void rxrpc_abort_calls(struct rxrpc_connection *conn) */ static void rxrpc_call_is_secure(struct rxrpc_call *call) { - _enter("%p", call); - if (call) { - spin_lock(&call->state_lock); - if (call->state == RXRPC_CALL_SERVER_SECURING) { - call->state = RXRPC_CALL_SERVER_RECV_REQUEST; - rxrpc_notify_socket(call); - } - spin_unlock(&call->state_lock); + if (call && __rxrpc_call_state(call) == RXRPC_CALL_SERVER_SECURING) { + rxrpc_set_call_state(call, RXRPC_CALL_SERVER_RECV_REQUEST); + rxrpc_notify_socket(call); } } diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index b0ac846b16fc..84e23954290c 100644 --- a/net/rxrpc/input.c +++ b/net/rxrpc/input.c @@ -184,7 +184,7 @@ void rxrpc_congestion_degrade(struct rxrpc_call *call) if (call->cong_mode != RXRPC_CALL_SLOW_START && call->cong_mode != RXRPC_CALL_CONGEST_AVOIDANCE) return; - if (call->state == RXRPC_CALL_CLIENT_AWAIT_REPLY) + if (__rxrpc_call_state(call) == RXRPC_CALL_CLIENT_AWAIT_REPLY) return; rtt = ns_to_ktime(call->peer->srtt_us * (1000 / 8)); @@ -252,43 +252,31 @@ static bool rxrpc_rotate_tx_window(struct rxrpc_call *call, rxrpc_seq_t to, static void rxrpc_end_tx_phase(struct rxrpc_call *call, bool reply_begun, enum rxrpc_abort_reason abort_why) { - unsigned int state; - ASSERT(test_bit(RXRPC_CALL_TX_LAST, 
&call->flags)); - spin_lock(&call->state_lock); - - state = call->state; - switch (state) { + switch (__rxrpc_call_state(call)) { case RXRPC_CALL_CLIENT_SEND_REQUEST: case RXRPC_CALL_CLIENT_AWAIT_REPLY: - if (reply_begun) - call->state = state = RXRPC_CALL_CLIENT_RECV_REPLY; - else - call->state = state = RXRPC_CALL_CLIENT_AWAIT_REPLY; + if (reply_begun) { + rxrpc_set_call_state(call, RXRPC_CALL_CLIENT_RECV_REPLY); + trace_rxrpc_txqueue(call, rxrpc_txqueue_end); + break; + } + + rxrpc_set_call_state(call, RXRPC_CALL_CLIENT_AWAIT_REPLY); + trace_rxrpc_txqueue(call, rxrpc_txqueue_await_reply); break; case RXRPC_CALL_SERVER_AWAIT_ACK: - __rxrpc_call_completed(call); - state = call->state; + rxrpc_call_completed(call); + trace_rxrpc_txqueue(call, rxrpc_txqueue_end); break; default: - goto bad_state; + kdebug("end_tx %s", rxrpc_call_states[__rxrpc_call_state(call)]); + rxrpc_proto_abort(call, call->tx_top, abort_why); + break; } - - spin_unlock(&call->state_lock); - if (state == RXRPC_CALL_CLIENT_AWAIT_REPLY) - trace_rxrpc_txqueue(call, rxrpc_txqueue_await_reply); - else - trace_rxrpc_txqueue(call, rxrpc_txqueue_end); - _leave(" = ok"); - return; - -bad_state: - spin_unlock(&call->state_lock); - kdebug("end_tx %s", rxrpc_call_states[call->state]); - rxrpc_proto_abort(call, call->tx_top, abort_why); } /* @@ -326,30 +314,23 @@ static void rxrpc_end_rx_phase(struct rxrpc_call *call, rxrpc_serial_t serial) { rxrpc_seq_t whigh = READ_ONCE(call->rx_highest_seq); - _enter("%d,%s", call->debug_id, rxrpc_call_states[call->state]); + _enter("%d,%s", call->debug_id, rxrpc_call_states[__rxrpc_call_state(call)]); trace_rxrpc_receive(call, rxrpc_receive_end, 0, whigh); - if (rxrpc_call_state(call) == RXRPC_CALL_CLIENT_RECV_REPLY) - rxrpc_propose_delay_ACK(call, serial, rxrpc_propose_ack_terminal_ack); - - spin_lock(&call->state_lock); - - switch (call->state) { + switch (__rxrpc_call_state(call)) { case RXRPC_CALL_CLIENT_RECV_REPLY: - __rxrpc_call_completed(call); - spin_unlock(&call->state_lock); + rxrpc_propose_delay_ACK(call, serial, rxrpc_propose_ack_terminal_ack); + rxrpc_call_completed(call); break; case RXRPC_CALL_SERVER_RECV_REQUEST: - call->state = RXRPC_CALL_SERVER_ACK_REQUEST; + rxrpc_set_call_state(call, RXRPC_CALL_SERVER_ACK_REQUEST); call->expect_req_by = jiffies + MAX_JIFFY_OFFSET; - spin_unlock(&call->state_lock); - rxrpc_propose_delay_ACK(call, serial, - rxrpc_propose_ack_processing_op); + rxrpc_propose_delay_ACK(call, serial, rxrpc_propose_ack_processing_op); break; + default: - spin_unlock(&call->state_lock); break; } } @@ -583,7 +564,6 @@ static bool rxrpc_input_split_jumbo(struct rxrpc_call *call, struct sk_buff *skb static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb) { struct rxrpc_skb_priv *sp = rxrpc_skb(skb); - enum rxrpc_call_state state; rxrpc_serial_t serial = sp->hdr.serial; rxrpc_seq_t seq0 = sp->hdr.seq; @@ -591,11 +571,20 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb) call->ackr_window, call->ackr_wtop, call->rx_highest_seq, skb->len, seq0); - state = READ_ONCE(call->state); - if (state >= RXRPC_CALL_COMPLETE) + if (__rxrpc_call_is_complete(call)) return; - if (state == RXRPC_CALL_SERVER_RECV_REQUEST) { + switch (__rxrpc_call_state(call)) { + case RXRPC_CALL_CLIENT_SEND_REQUEST: + case RXRPC_CALL_CLIENT_AWAIT_REPLY: + /* Received data implicitly ACKs all of the request + * packets we sent when we're acting as a client. 
+ */ + if (!rxrpc_receiving_reply(call)) + goto out_notify; + break; + + case RXRPC_CALL_SERVER_RECV_REQUEST: { unsigned long timo = READ_ONCE(call->next_req_timo); unsigned long now, expect_req_by; @@ -606,15 +595,12 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb) rxrpc_reduce_call_timer(call, expect_req_by, now, rxrpc_timer_set_for_idle); } + break; } - /* Received data implicitly ACKs all of the request packets we sent - * when we're acting as a client. - */ - if ((state == RXRPC_CALL_CLIENT_SEND_REQUEST || - state == RXRPC_CALL_CLIENT_AWAIT_REPLY) && - !rxrpc_receiving_reply(call)) - goto out_notify; + default: + break; + } if (!rxrpc_input_split_jumbo(call, skb)) { rxrpc_proto_abort(call, sp->hdr.seq, rxrpc_badmsg_bad_jumbo); @@ -904,7 +890,7 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) return rxrpc_proto_abort(call, 0, rxrpc_eproto_ackr_zero); /* Ignore ACKs unless we are or have just been transmitting. */ - switch (READ_ONCE(call->state)) { + switch (__rxrpc_call_state(call)) { case RXRPC_CALL_CLIENT_SEND_REQUEST: case RXRPC_CALL_CLIENT_AWAIT_REPLY: case RXRPC_CALL_SERVER_SEND_REPLY: @@ -1027,7 +1013,7 @@ void rxrpc_input_call_packet(struct rxrpc_call *call, struct sk_buff *skb) */ void rxrpc_implicit_end_call(struct rxrpc_call *call, struct sk_buff *skb) { - switch (READ_ONCE(call->state)) { + switch (__rxrpc_call_state(call)) { case RXRPC_CALL_SERVER_AWAIT_ACK: rxrpc_call_completed(call); fallthrough; diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index 4244fbf87fe6..6b2022240076 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -241,7 +241,7 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) } rxrpc_tx_backoff(call, ret); - if (call->state < RXRPC_CALL_COMPLETE) { + if (!__rxrpc_call_is_complete(call)) { if (ret < 0) rxrpc_cancel_rtt_probe(call, serial, rtt_slot); rxrpc_set_keepalive(call); @@ -696,7 +696,7 @@ void rxrpc_send_keepalive(struct rxrpc_peer *peer) static inline void rxrpc_instant_resend(struct rxrpc_call *call, struct rxrpc_txbuf *txb) { - if (call->state < RXRPC_CALL_COMPLETE) + if (!__rxrpc_call_is_complete(call)) kdebug("resend"); } diff --git a/net/rxrpc/proc.c b/net/rxrpc/proc.c index 5c56bcab578c..e5cbe88daaa5 100644 --- a/net/rxrpc/proc.c +++ b/net/rxrpc/proc.c @@ -50,6 +50,7 @@ static int rxrpc_call_seq_show(struct seq_file *seq, void *v) struct rxrpc_local *local; struct rxrpc_call *call; struct rxrpc_net *rxnet = rxrpc_net(seq_file_net(seq)); + enum rxrpc_call_state state; unsigned long timeout = 0; rxrpc_seq_t acks_hard_ack; char lbuff[50], rbuff[50]; @@ -73,7 +74,8 @@ static int rxrpc_call_seq_show(struct seq_file *seq, void *v) sprintf(rbuff, "%pISpc", &call->dest_srx.transport); - if (call->state != RXRPC_CALL_SERVER_PREALLOC) { + state = rxrpc_call_state(call); + if (state != RXRPC_CALL_SERVER_PREALLOC) { timeout = READ_ONCE(call->expect_rx_by); timeout -= jiffies; } @@ -89,7 +91,7 @@ static int rxrpc_call_seq_show(struct seq_file *seq, void *v) call->call_id, rxrpc_is_service_call(call) ? 
"Svc" : "Clt", refcount_read(&call->ref), - rxrpc_call_states[call->state], + rxrpc_call_states[state], call->abort_code, call->debug_id, acks_hard_ack, READ_ONCE(call->tx_top) - acks_hard_ack, diff --git a/net/rxrpc/rxkad.c b/net/rxrpc/rxkad.c index 3dea0dc319a2..9abf6bb56b65 100644 --- a/net/rxrpc/rxkad.c +++ b/net/rxrpc/rxkad.c @@ -1143,7 +1143,7 @@ static int rxkad_verify_response(struct rxrpc_connection *conn, call = rcu_dereference_protected( conn->channels[i].call, lockdep_is_held(&conn->bundle->channel_lock)); - if (call && call->state < RXRPC_CALL_COMPLETE) { + if (call && !__rxrpc_call_is_complete(call)) { rxrpc_abort_conn(conn, skb, RXKADSEALEDINCON, -EPROTO, rxkad_abort_resp_call_state); goto protocol_error_unlock; diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c index 5b17ee1cbfbf..b0182de63226 100644 --- a/net/rxrpc/sendmsg.c +++ b/net/rxrpc/sendmsg.c @@ -701,9 +701,6 @@ int rxrpc_kernel_send_data(struct socket *sock, struct rxrpc_call *call, mutex_lock(&call->user_mutex); - _debug("CALL %d USR %lx ST %d on CONN %p", - call->debug_id, call->user_call_ID, call->state, call->conn); - ret = rxrpc_send_data(rxrpc_sk(sock->sk), call, msg, len, notify_end_tx, &dropped_lock); if (ret == -ESHUTDOWN) From patchwork Tue Dec 6 16:02:25 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 30418 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp2908753wrr; Tue, 6 Dec 2022 08:08:54 -0800 (PST) X-Google-Smtp-Source: AA0mqf68k2OCI+dQCnuZXJsC9mlZjHPmfNH/8e80GV0X9gC7Cz8ExWTPsV6acFRRVCzaJiRkkleC X-Received: by 2002:a17:90a:f481:b0:219:56eb:c228 with SMTP id bx1-20020a17090af48100b0021956ebc228mr36326556pjb.81.1670342934568; Tue, 06 Dec 2022 08:08:54 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1670342934; cv=none; d=google.com; s=arc-20160816; b=iNbFbcDvZYna5Y17MGqXEA7gTSh52p7DvgR2D1tPsdCvdOtxCEfMOC/ToRlJJalZIw 1WQeFhxWWMpc3/La8wvPd7kIrVWmTjT37/3A4j91gFfQaFKNOnaXEPdT7BIjJD377u+a FVaEPApn1flFh9uplJmeLBKE8C2gTG3RBqbCjGy6GP34ipa8g2GYJtl8Ti9H7wwWvbA+ ME6iWinQiNxJns5JeJ+edww6FqtM9kCsScGm1jeO12ODajx6row0RXgLU5tkIyUoGdgy U1IU4qTA2/w80mIYwSAeox87Ceg5lOzYTQmER0miesdOSBYFqO1hk7E3BtyvxspzwEZr hQLw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:cc:to:from :subject:organization:dkim-signature; bh=5ZZUM4hRQWwL0ArwwcTJwT8ehiB7UX/Wr5YZI17NOK0=; b=R5uks1Rql+CMaHzWx1cuIXxozCIdGMg+zAjFAsSMLuz3Eu3yfjm+yr4c0GGBy0NDwg 7b9vquqcxRRuBjRsloOQXnzjB9bxjW1ZEPPUK+EiVV1CtJKKchGDCqnF+N9Ftc/8t2H2 WU3HkupJVA5O0GtyMOqnY52E6xy5ByS7SpjuerCo07qkpK+toxygadt2Sc3eQDtBhkWc sYlyEa0Vd+X8gQsW0hdS19y6gYMgiF/fUxuf3fHCNJb7HwEal4HvK5WDl7gBtlkAlLqX blAYWwxfhKGqAPZwGSKlI8TTyf85JnMMPwC809zQ1lcLjtYvqemR8ekGWUvTQmeDPvB1 5hOg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=HOTjPazF; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: from out1.vger.email (out1.vger.email. 
3798903 Subject: [PATCH net-next 27/32] rxrpc: Make the local endpoint hold a ref on a connected call From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Tue, 06 Dec 2022 16:02:25 +0000 Message-ID: <167034254511.1105287.9817497591703679657.stgit@warthog.procyon.org.uk> In-Reply-To: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> References: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.6 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1751481513320539893?= X-GMAIL-MSGID: =?utf-8?q?1751481513320539893?= Make the local endpoint and it's I/O thread hold a reference on a connected call until that call is disconnected. Without this, we're reliant on either the AF_RXRPC socket to hold a ref (which is dropped when the call is released) or a queued work item to hold a ref (the work item is being replaced with the I/O thread). Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- include/trace/events/rxrpc.h | 2 ++ net/rxrpc/conn_client.c | 4 ++++ 2 files changed, 6 insertions(+) diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index 4d5eb08025c0..81182efca327 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -257,6 +257,7 @@ E_(rxrpc_client_to_idle, "->Idle") #define rxrpc_call_traces \ + EM(rxrpc_call_get_io_thread, "GET iothread") \ EM(rxrpc_call_get_input, "GET input ") \ EM(rxrpc_call_get_kernel_service, "GET krnl-srv") \ EM(rxrpc_call_get_notify_socket, "GET notify ") \ @@ -269,6 +270,7 @@ EM(rxrpc_call_new_prealloc_service, "NEW prealloc") \ EM(rxrpc_call_put_discard_prealloc, "PUT disc-pre") \ EM(rxrpc_call_put_discard_error, "PUT disc-err") \ + EM(rxrpc_call_put_io_thread, "PUT iothread") \ EM(rxrpc_call_put_input, "PUT input ") \ EM(rxrpc_call_put_kernel, "PUT kernel ") \ EM(rxrpc_call_put_poke, "PUT poke ") \ diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c index 8e3e0ffa4417..26b0ad38f91b 100644 --- a/net/rxrpc/conn_client.c +++ b/net/rxrpc/conn_client.c @@ -718,8 +718,11 @@ int rxrpc_connect_call(struct rxrpc_call *call, gfp_t gfp) rxrpc_discard_expired_client_conns(&rxnet->client_conn_reaper); + rxrpc_get_call(call, rxrpc_call_get_io_thread); + bundle = rxrpc_prep_call(call, gfp); if (IS_ERR(bundle)) { + rxrpc_put_call(call, rxrpc_call_get_io_thread); ret = PTR_ERR(bundle); goto out; } @@ -902,6 +905,7 @@ void rxrpc_disconnect_client_call(struct rxrpc_bundle *bundle, struct rxrpc_call out: spin_unlock(&bundle->channel_lock); + rxrpc_put_call(call, rxrpc_call_put_io_thread); _leave(""); return; } From patchwork Tue Dec 6 16:02:32 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 30420 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp2908972wrr; Tue, 6 Dec 2022 08:09:15 
gI5ngB/iH9DRNUYKc7dUMYg4liznC2o= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-644-5CunqjA9Nm-gBvWUt6tGuw-1; Tue, 06 Dec 2022 11:02:36 -0500 X-MC-Unique: 5CunqjA9Nm-gBvWUt6tGuw-1 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com [10.11.54.5]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 20BC73C01D91; Tue, 6 Dec 2022 16:02:36 +0000 (UTC) Received: from warthog.procyon.org.uk (unknown [10.33.36.17]) by smtp.corp.redhat.com (Postfix) with ESMTP id 48A771759E; Tue, 6 Dec 2022 16:02:35 +0000 (UTC) Organization: Red Hat UK Ltd. Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SI4 1TE, United Kingdom. Registered in England and Wales under Company Registration No. 3798903 Subject: [PATCH net-next 28/32] rxrpc: Move the client conn cache management to the I/O thread From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Tue, 06 Dec 2022 16:02:32 +0000 Message-ID: <167034255260.1105287.2424525947618486729.stgit@warthog.procyon.org.uk> In-Reply-To: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> References: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.5 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1751481534336944792?= X-GMAIL-MSGID: =?utf-8?q?1751481534336944792?= Move the management of the client connection cache to the I/O thread rather than managing it from the namespace as an aggregate across all the local endpoints within the namespace. This will allow a load of locking to be got rid of in a future patch as only the I/O thread will be looking at the this. The downside is that the total number of cached connections on the system can get higher because the limit is now per-local rather than per-netns. We can, however, keep the number of client conns in use across the entire netfs and use that to reduce the expiration time of idle connection. 
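[Editorial aside, not part of the patch] The scheme described above amounts to: the reap timer never touches the connection cache itself; it only flags that reaping is due and wakes the endpoint's I/O thread, which is then the only context that walks the per-local idle list. A minimal standalone model of that pattern follows (userspace C with pthreads; all names such as IDLE_EXPIRY, idle_conn, reap_timer and io_thread are illustrative only and are not the kernel code):

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define IDLE_EXPIRY 2 /* seconds a connection may stay idle */

struct idle_conn {
	struct idle_conn *next;
	time_t idle_since;
	int id;
};

static struct idle_conn *idle_conns;	/* oldest entry at the head */
static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_bool reap_due = false;
static atomic_bool stop = false;

/* Models the reap timer: it does no work itself, it only flags that
 * reaping is due (the kernel version would also wake the I/O thread). */
static void *reap_timer(void *unused)
{
	(void)unused;
	while (!atomic_load(&stop)) {
		sleep(1);
		atomic_store(&reap_due, true);
	}
	return NULL;
}

/* Models the per-endpoint I/O thread: the only place idle connections
 * are discarded, so no cross-thread discard serialisation is needed. */
static void *io_thread(void *unused)
{
	(void)unused;
	while (!atomic_load(&stop)) {
		if (atomic_exchange(&reap_due, false)) {
			time_t now = time(NULL);

			pthread_mutex_lock(&cache_lock);
			while (idle_conns &&
			       now - idle_conns->idle_since >= IDLE_EXPIRY) {
				struct idle_conn *conn = idle_conns;

				idle_conns = conn->next;
				printf("reaped idle conn %d\n", conn->id);
				free(conn);
			}
			pthread_mutex_unlock(&cache_lock);
		}
		usleep(100 * 1000); /* the real thread sleeps until woken */
	}
	return NULL;
}

int main(void)
{
	pthread_t timer, io;

	for (int i = 0; i < 3; i++) {
		struct idle_conn *conn = calloc(1, sizeof(*conn)), **pp;

		conn->id = i;
		conn->idle_since = time(NULL);
		for (pp = &idle_conns; *pp; pp = &(*pp)->next)
			;
		*pp = conn;	/* append so the oldest stays at the head */
	}

	pthread_create(&timer, NULL, reap_timer, NULL);
	pthread_create(&io, NULL, io_thread, NULL);
	sleep(4);
	atomic_store(&stop, true);
	pthread_join(timer, NULL);
	pthread_join(io, NULL);
	return 0;
}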
Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- net/rxrpc/ar-internal.h | 17 +++++---- net/rxrpc/conn_client.c | 92 ++++++++++++++++------------------------------ net/rxrpc/conn_object.c | 1 - net/rxrpc/io_thread.c | 4 ++ net/rxrpc/local_object.c | 17 +++++++++ net/rxrpc/net_ns.c | 17 --------- 6 files changed, 62 insertions(+), 86 deletions(-) diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 74f424d9001d..e79263927a6d 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -76,13 +76,7 @@ struct rxrpc_net { bool live; - bool kill_all_client_conns; atomic_t nr_client_conns; - spinlock_t client_conn_cache_lock; /* Lock for ->*_client_conns */ - struct mutex client_conn_discard_lock; /* Prevent multiple discarders */ - struct list_head idle_client_conns; - struct work_struct client_conn_reaper; - struct timer_list client_conn_reap_timer; struct hlist_head local_endpoints; struct mutex local_mutex; /* Lock for ->local_endpoints */ @@ -295,8 +289,16 @@ struct rxrpc_local { struct sk_buff_head rx_queue; /* Received packets */ struct list_head conn_attend_q; /* Conns requiring immediate attention */ struct list_head call_attend_q; /* Calls requiring immediate attention */ + struct rb_root client_bundles; /* Client connection bundles by socket params */ spinlock_t client_bundles_lock; /* Lock for client_bundles */ + bool kill_all_client_conns; + spinlock_t client_conn_cache_lock; /* Lock for ->*_client_conns */ + struct list_head idle_client_conns; + struct timer_list client_conn_reap_timer; + unsigned long client_conn_flags; +#define RXRPC_CLIENT_CONN_REAP_TIMER 0 /* The client conn reap timer expired */ + spinlock_t lock; /* access lock */ rwlock_t services_lock; /* lock for services list */ int debug_id; /* debug ID for printks */ @@ -942,8 +944,7 @@ void rxrpc_expose_client_call(struct rxrpc_call *); void rxrpc_disconnect_client_call(struct rxrpc_bundle *, struct rxrpc_call *); void rxrpc_deactivate_bundle(struct rxrpc_bundle *); void rxrpc_put_client_conn(struct rxrpc_connection *, enum rxrpc_conn_trace); -void rxrpc_discard_expired_client_conns(struct work_struct *); -void rxrpc_destroy_all_client_connections(struct rxrpc_net *); +void rxrpc_discard_expired_client_conns(struct rxrpc_local *); void rxrpc_clean_up_local_conns(struct rxrpc_local *); diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c index 26b0ad38f91b..6b6b2f72c0ed 100644 --- a/net/rxrpc/conn_client.c +++ b/net/rxrpc/conn_client.c @@ -578,17 +578,17 @@ static void rxrpc_activate_one_channel(struct rxrpc_connection *conn, */ static void rxrpc_unidle_conn(struct rxrpc_bundle *bundle, struct rxrpc_connection *conn) { - struct rxrpc_net *rxnet = bundle->local->rxnet; + struct rxrpc_local *local = bundle->local; bool drop_ref; if (!list_empty(&conn->cache_link)) { drop_ref = false; - spin_lock(&rxnet->client_conn_cache_lock); + spin_lock(&local->client_conn_cache_lock); if (!list_empty(&conn->cache_link)) { list_del_init(&conn->cache_link); drop_ref = true; } - spin_unlock(&rxnet->client_conn_cache_lock); + spin_unlock(&local->client_conn_cache_lock); if (drop_ref) rxrpc_put_connection(conn, rxrpc_conn_put_unidle); } @@ -710,14 +710,10 @@ static int rxrpc_wait_for_channel(struct rxrpc_bundle *bundle, int rxrpc_connect_call(struct rxrpc_call *call, gfp_t gfp) { struct rxrpc_bundle *bundle; - struct rxrpc_local *local = call->local; - struct rxrpc_net *rxnet = local->rxnet; int ret = 0; _enter("{%d,%lx},", call->debug_id, call->user_call_ID); - 
rxrpc_discard_expired_client_conns(&rxnet->client_conn_reaper); - rxrpc_get_call(call, rxrpc_call_get_io_thread); bundle = rxrpc_prep_call(call, gfp); @@ -787,14 +783,14 @@ void rxrpc_expose_client_call(struct rxrpc_call *call) /* * Set the reap timer. */ -static void rxrpc_set_client_reap_timer(struct rxrpc_net *rxnet) +static void rxrpc_set_client_reap_timer(struct rxrpc_local *local) { - if (!rxnet->kill_all_client_conns) { + if (!local->kill_all_client_conns) { unsigned long now = jiffies; unsigned long reap_at = now + rxrpc_conn_idle_client_expiry; - if (rxnet->live) - timer_reduce(&rxnet->client_conn_reap_timer, reap_at); + if (local->rxnet->live) + timer_reduce(&local->client_conn_reap_timer, reap_at); } } @@ -805,7 +801,7 @@ void rxrpc_disconnect_client_call(struct rxrpc_bundle *bundle, struct rxrpc_call { struct rxrpc_connection *conn; struct rxrpc_channel *chan = NULL; - struct rxrpc_net *rxnet = bundle->local->rxnet; + struct rxrpc_local *local = bundle->local; unsigned int channel; bool may_reuse; u32 cid; @@ -896,11 +892,11 @@ void rxrpc_disconnect_client_call(struct rxrpc_bundle *bundle, struct rxrpc_call conn->idle_timestamp = jiffies; rxrpc_get_connection(conn, rxrpc_conn_get_idle); - spin_lock(&rxnet->client_conn_cache_lock); - list_move_tail(&conn->cache_link, &rxnet->idle_client_conns); - spin_unlock(&rxnet->client_conn_cache_lock); + spin_lock(&local->client_conn_cache_lock); + list_move_tail(&conn->cache_link, &local->idle_client_conns); + spin_unlock(&local->client_conn_cache_lock); - rxrpc_set_client_reap_timer(rxnet); + rxrpc_set_client_reap_timer(local); } out: @@ -990,42 +986,34 @@ void rxrpc_kill_client_conn(struct rxrpc_connection *conn) * This may be called from conn setup or from a work item so cannot be * considered non-reentrant. */ -void rxrpc_discard_expired_client_conns(struct work_struct *work) +void rxrpc_discard_expired_client_conns(struct rxrpc_local *local) { struct rxrpc_connection *conn; - struct rxrpc_net *rxnet = - container_of(work, struct rxrpc_net, client_conn_reaper); unsigned long expiry, conn_expires_at, now; unsigned int nr_conns; _enter(""); - if (list_empty(&rxnet->idle_client_conns)) { + if (list_empty(&local->idle_client_conns)) { _leave(" [empty]"); return; } - /* Don't double up on the discarding */ - if (!mutex_trylock(&rxnet->client_conn_discard_lock)) { - _leave(" [already]"); - return; - } - /* We keep an estimate of what the number of conns ought to be after * we've discarded some so that we don't overdo the discarding. */ - nr_conns = atomic_read(&rxnet->nr_client_conns); + nr_conns = atomic_read(&local->rxnet->nr_client_conns); next: - spin_lock(&rxnet->client_conn_cache_lock); + spin_lock(&local->client_conn_cache_lock); - if (list_empty(&rxnet->idle_client_conns)) + if (list_empty(&local->idle_client_conns)) goto out; - conn = list_entry(rxnet->idle_client_conns.next, + conn = list_entry(local->idle_client_conns.next, struct rxrpc_connection, cache_link); - if (!rxnet->kill_all_client_conns) { + if (!local->kill_all_client_conns) { /* If the number of connections is over the reap limit, we * expedite discard by reducing the expiry timeout. 
We must, * however, have at least a short grace period to be able to do @@ -1048,7 +1036,7 @@ void rxrpc_discard_expired_client_conns(struct work_struct *work) trace_rxrpc_client(conn, -1, rxrpc_client_discard); list_del_init(&conn->cache_link); - spin_unlock(&rxnet->client_conn_cache_lock); + spin_unlock(&local->client_conn_cache_lock); rxrpc_unbundle_conn(conn); /* Drop the ->cache_link ref */ @@ -1066,32 +1054,11 @@ void rxrpc_discard_expired_client_conns(struct work_struct *work) * then things get messier. */ _debug("not yet"); - if (!rxnet->kill_all_client_conns) - timer_reduce(&rxnet->client_conn_reap_timer, conn_expires_at); + if (!local->kill_all_client_conns) + timer_reduce(&local->client_conn_reap_timer, conn_expires_at); out: - spin_unlock(&rxnet->client_conn_cache_lock); - mutex_unlock(&rxnet->client_conn_discard_lock); - _leave(""); -} - -/* - * Preemptively destroy all the client connection records rather than waiting - * for them to time out - */ -void rxrpc_destroy_all_client_connections(struct rxrpc_net *rxnet) -{ - _enter(""); - - spin_lock(&rxnet->client_conn_cache_lock); - rxnet->kill_all_client_conns = true; - spin_unlock(&rxnet->client_conn_cache_lock); - - del_timer_sync(&rxnet->client_conn_reap_timer); - - if (!rxrpc_queue_work(&rxnet->client_conn_reaper)) - _debug("destroy: queue failed"); - + spin_unlock(&local->client_conn_cache_lock); _leave(""); } @@ -1101,14 +1068,19 @@ void rxrpc_destroy_all_client_connections(struct rxrpc_net *rxnet) void rxrpc_clean_up_local_conns(struct rxrpc_local *local) { struct rxrpc_connection *conn, *tmp; - struct rxrpc_net *rxnet = local->rxnet; LIST_HEAD(graveyard); _enter(""); - spin_lock(&rxnet->client_conn_cache_lock); + spin_lock(&local->client_conn_cache_lock); + local->kill_all_client_conns = true; + spin_unlock(&local->client_conn_cache_lock); + + del_timer_sync(&local->client_conn_reap_timer); + + spin_lock(&local->client_conn_cache_lock); - list_for_each_entry_safe(conn, tmp, &rxnet->idle_client_conns, + list_for_each_entry_safe(conn, tmp, &local->idle_client_conns, cache_link) { if (conn->local == local) { atomic_dec(&conn->active); @@ -1117,7 +1089,7 @@ void rxrpc_clean_up_local_conns(struct rxrpc_local *local) } } - spin_unlock(&rxnet->client_conn_cache_lock); + spin_unlock(&local->client_conn_cache_lock); while (!list_empty(&graveyard)) { conn = list_entry(graveyard.next, diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c index 52fad0744239..ed591a1f82c2 100644 --- a/net/rxrpc/conn_object.c +++ b/net/rxrpc/conn_object.c @@ -465,7 +465,6 @@ void rxrpc_destroy_all_connections(struct rxrpc_net *rxnet) _enter(""); atomic_dec(&rxnet->nr_conns); - rxrpc_destroy_all_client_connections(rxnet); del_timer_sync(&rxnet->service_conn_reap_timer); rxrpc_queue_work(&rxnet->service_conn_reaper); diff --git a/net/rxrpc/io_thread.c b/net/rxrpc/io_thread.c index a89f8ba465b1..6153ea162153 100644 --- a/net/rxrpc/io_thread.c +++ b/net/rxrpc/io_thread.c @@ -439,6 +439,10 @@ int rxrpc_io_thread(void *data) continue; } + if (test_and_clear_bit(RXRPC_CLIENT_CONN_REAP_TIMER, + &local->client_conn_flags)) + rxrpc_discard_expired_client_conns(local); + /* Deal with calls that want immediate attention. 
*/ if ((call = list_first_entry_or_null(&local->call_attend_q, struct rxrpc_call, diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c index 57697adf52c6..6a463e90cd8b 100644 --- a/net/rxrpc/local_object.c +++ b/net/rxrpc/local_object.c @@ -82,6 +82,16 @@ static long rxrpc_local_cmp_key(const struct rxrpc_local *local, } } +static void rxrpc_client_conn_reap_timeout(struct timer_list *timer) +{ + struct rxrpc_local *local = + container_of(timer, struct rxrpc_local, client_conn_reap_timer); + + if (local->kill_all_client_conns && + test_and_set_bit(RXRPC_CLIENT_CONN_REAP_TIMER, &local->client_conn_flags)) + rxrpc_wake_up_io_thread(local); +} + /* * Allocate a new local endpoint. */ @@ -104,8 +114,15 @@ static struct rxrpc_local *rxrpc_alloc_local(struct net *net, skb_queue_head_init(&local->rx_queue); INIT_LIST_HEAD(&local->conn_attend_q); INIT_LIST_HEAD(&local->call_attend_q); + local->client_bundles = RB_ROOT; spin_lock_init(&local->client_bundles_lock); + local->kill_all_client_conns = false; + spin_lock_init(&local->client_conn_cache_lock); + INIT_LIST_HEAD(&local->idle_client_conns); + timer_setup(&local->client_conn_reap_timer, + rxrpc_client_conn_reap_timeout, 0); + spin_lock_init(&local->lock); rwlock_init(&local->services_lock); local->debug_id = atomic_inc_return(&rxrpc_debug_id); diff --git a/net/rxrpc/net_ns.c b/net/rxrpc/net_ns.c index 5905530e2f33..a0319c040c25 100644 --- a/net/rxrpc/net_ns.c +++ b/net/rxrpc/net_ns.c @@ -10,15 +10,6 @@ unsigned int rxrpc_net_id; -static void rxrpc_client_conn_reap_timeout(struct timer_list *timer) -{ - struct rxrpc_net *rxnet = - container_of(timer, struct rxrpc_net, client_conn_reap_timer); - - if (rxnet->live) - rxrpc_queue_work(&rxnet->client_conn_reaper); -} - static void rxrpc_service_conn_reap_timeout(struct timer_list *timer) { struct rxrpc_net *rxnet = @@ -63,14 +54,6 @@ static __net_init int rxrpc_init_net(struct net *net) rxrpc_service_conn_reap_timeout, 0); atomic_set(&rxnet->nr_client_conns, 0); - rxnet->kill_all_client_conns = false; - spin_lock_init(&rxnet->client_conn_cache_lock); - mutex_init(&rxnet->client_conn_discard_lock); - INIT_LIST_HEAD(&rxnet->idle_client_conns); - INIT_WORK(&rxnet->client_conn_reaper, - rxrpc_discard_expired_client_conns); - timer_setup(&rxnet->client_conn_reap_timer, - rxrpc_client_conn_reap_timeout, 0); INIT_HLIST_HEAD(&rxnet->local_endpoints); mutex_init(&rxnet->local_mutex); From patchwork Tue Dec 6 16:02:41 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 30421 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp2909281wrr; Tue, 6 Dec 2022 08:09:41 -0800 (PST) X-Google-Smtp-Source: AA0mqf6ZRDkpauTNsBw2yOapPkaIXk3w8DMZ67nOm+jJVnkhcLCEMJdKj1AAcc8SOw83fNGHROOo X-Received: by 2002:a62:8302:0:b0:576:f1ee:ebb2 with SMTP id h2-20020a628302000000b00576f1eeebb2mr10373272pfe.78.1670342981137; Tue, 06 Dec 2022 08:09:41 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1670342981; cv=none; d=google.com; s=arc-20160816; b=069SuoxJJw1ANI3v7L30LCDCTFwwcRsktrRstRcre5+Kv6CpPMGSSdmgHAiabhsXul GDdO+qyNLn48gNHRna0cjEbC3k4Ea81oKLoBwGAsHqKDlyWDF2IxeCxY07aJdPo/BORK bU+yWjw90MPvhwjbOArSoRC8BdXfeEEAYPhrVkEp1H2b/KneAqi9UmeEVLYny79vMlGy kTCCBrLZLwWfLi0cgRgHrrxj+QRVciKqK5V0bFnUy3dOQTSRRcC8wIHCm38WM4Mb5IPX PoukhWicwsgINnTbaTADDRMCuug2TNA2eLIuxFtP8F9VVCrdFVnJVXivP28uhcGQuixo oXxA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; 
Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SI4 1TE, United Kingdom. Registered in England and Wales under Company Registration No. 3798903 Subject: [PATCH net-next 29/32] rxrpc: Show consumed and freed packets as non-dropped in dropwatch From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Tue, 06 Dec 2022 16:02:41 +0000 Message-ID: <167034256128.1105287.11921813097806982711.stgit@warthog.procyon.org.uk> In-Reply-To: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> References: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.6 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1751481562148551690?= X-GMAIL-MSGID: =?utf-8?q?1751481562148551690?= Set a reason when freeing a packet that has been consumed such that dropwatch doesn't complain that it has been dropped. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- net/rxrpc/skbuff.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/net/rxrpc/skbuff.c b/net/rxrpc/skbuff.c index ebe0c75e7b07..944320e65ea8 100644 --- a/net/rxrpc/skbuff.c +++ b/net/rxrpc/skbuff.c @@ -63,7 +63,7 @@ void rxrpc_free_skb(struct sk_buff *skb, enum rxrpc_skb_trace why) if (skb) { int n = atomic_dec_return(select_skb_count(skb)); trace_rxrpc_skb(skb, refcount_read(&skb->users), n, why); - kfree_skb(skb); + kfree_skb_reason(skb, SKB_CONSUMED); } } @@ -78,6 +78,6 @@ void rxrpc_purge_queue(struct sk_buff_head *list) int n = atomic_dec_return(select_skb_count(skb)); trace_rxrpc_skb(skb, refcount_read(&skb->users), n, rxrpc_skb_put_purge); - kfree_skb(skb); + kfree_skb_reason(skb, SKB_CONSUMED); } } From patchwork Tue Dec 6 16:02:47 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 30422 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp2909645wrr; Tue, 6 Dec 2022 08:10:14 -0800 (PST) X-Google-Smtp-Source: AA0mqf63pkAdegtQka7rftGaCN4qX/FRp1R9vG4DrXB2fkSV05hN/E+SjZW585cDdi6EdwUYrorw X-Received: by 2002:a17:90a:6c46:b0:21a:401:d2c0 with SMTP id x64-20020a17090a6c4600b0021a0401d2c0mr2514513pjj.34.1670343014122; Tue, 06 Dec 2022 08:10:14 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1670343014; cv=none; d=google.com; s=arc-20160816; b=KgUUiwbnJuI5FyGJg5ZKfDv20CkU3qZg76VFLvckM4sKun7RNYrJcHuZ9Q0r6GU19B NL6rXVfmmS7OxG5Q5BJZByAxpYc3lg+wbvQAgPFtAZFdG68PNOOzMgQdq1GCWCmxOzD1 aQk/XHzSPqGi/feF47PpyxaM6yUaATKoHN531XBNCdxhUvosXnfoCmjqC6N3papNJMBQ vACbpESPwwmaSPU0+m4dtm6HhwyF3nbaw+O/x+ujeqfJPU9ffXtVjSZqopiUUStZLDst fuJp1AdU5B2GdZY5HsQNjoFVMU7KSOkL8uQ3+Ug9cG+j92uqfJCZYUznYjRZzWppzBHZ iB2g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version 
Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SI4 1TE, United Kingdom. Registered in England and Wales under Company Registration No. 3798903 Subject: [PATCH net-next 30/32] rxrpc: Change rx_packet tracepoint to display securityIndex not type twice From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Tue, 06 Dec 2022 16:02:47 +0000 Message-ID: <167034256776.1105287.711207190763674876.stgit@warthog.procyon.org.uk> In-Reply-To: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> References: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.8 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1751481596568411552?= X-GMAIL-MSGID: =?utf-8?q?1751481596568411552?= Change the rx_packet tracepoint to display the securityIndex from the packet header instead of displaying the type in numeric form. There's no need for the latter, as the display of the type in symbolic form will fall back automatically to displaying the hex value if no symbol is available. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- include/trace/events/rxrpc.h | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index 81182efca327..a4c5bc481cfc 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -752,9 +752,8 @@ TRACE_EVENT(rxrpc_rx_packet, __entry->hdr.epoch, __entry->hdr.cid, __entry->hdr.callNumber, __entry->hdr.serviceId, __entry->hdr.serial, __entry->hdr.seq, - __entry->hdr.type, __entry->hdr.flags, - __entry->hdr.type <= 15 ? 
- __print_symbolic(__entry->hdr.type, rxrpc_pkts) : "?UNK") + __entry->hdr.securityIndex, __entry->hdr.flags, + __print_symbolic(__entry->hdr.type, rxrpc_pkts)) ); TRACE_EVENT(rxrpc_rx_done, From patchwork Tue Dec 6 16:02:56 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 30423 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp2909750wrr; Tue, 6 Dec 2022 08:10:23 -0800 (PST) X-Google-Smtp-Source: AA0mqf56VJE54xLcaHvwBKinLHI6s2qHv656gmhI6v7Zz0Piv5tBPKHa2p4FV83LW3vJd6y5iyvu X-Received: by 2002:a17:902:848d:b0:188:f42d:5b1a with SMTP id c13-20020a170902848d00b00188f42d5b1amr70844356plo.26.1670343023186; Tue, 06 Dec 2022 08:10:23 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1670343023; cv=none; d=google.com; s=arc-20160816; b=pBLLB05aWkUiP2bC4ji9F7sNdnjtPV6ZHjk85Upjz8Atslu6A6Zno7BZO2Bwd3IOhY UAdlE1dWHb3IztSkhs1VbmNy1HzAA2K3JXcvunhBMfAjQ+rVo1T91jQaprdB8UtPCKA7 vJAT3HUlX2xBh2/bw3Y+1tKHFuWt+SEWVm9a8zq+fgsIjD0IuVMhmtdKSEcgUbkrCFTT hNJEkmDONeVlkQOnorlKG378I4BzOV2fiy8uJd8QxXd8SO40vjsbA+hH1yykZdI3CNwf HPXHCwULSghWhuxrILV/Tf7tim1P/R7HA8RaU8ecTjfb7Pi41vu4stvfiKKW65Fnb0VF HHAA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:cc:to:from :subject:organization:dkim-signature; bh=fdlW53yOUlzvoothoGeRCf1DNdUdd2Eq8nHnqri3HZg=; b=twgznzzIatlScG640vqvt5PnkykHQoXlHYMqw4qmldJ3/RSlib2lsertnCWB3WrltU z01KrVQbuq2mhea2uud0uMa1dKD2SHm7duTuk/MQd/T+7kvRRVGzv7cGVHcJIggR/CdS prc/l0VfZ1Rb+6l3yMvfMMkCxrK7rxLJBiNYGYJf4IjtZELVc/a9+QPQ/EhnBwz1OxiU sXZFpFnl5Np9BiaMGSMF85vzaakThxeU2eDtKjaSyghFji/52K+0koAfhIiBy2o+Lt/Z m9i8vFewlKrrVoLpn1YVH5YNVfVcD8VPYGQkmz4kkTlLbwf4Xmh/tXf2pc33wqsaRPPB s6FQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=DJmLcNzk; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: from out1.vger.email (out1.vger.email. 
3798903 Subject: [PATCH net-next 31/32] rxrpc: Move client call connection to the I/O thread From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Tue, 06 Dec 2022 16:02:56 +0000 Message-ID: <167034257623.1105287.13324829635106861385.stgit@warthog.procyon.org.uk> In-Reply-To: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> References: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1751481606111085944?= X-GMAIL-MSGID: =?utf-8?q?1751481606111085944?= Move the connection setup of client calls to the I/O thread so that a whole load of locking and barrierage can be eliminated. This necessitates the app thread waiting for connection to complete before it can begin encrypting data. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- include/trace/events/rxrpc.h | 5 net/rxrpc/ar-internal.h | 22 +- net/rxrpc/call_object.c | 58 ++++- net/rxrpc/call_state.c | 2 net/rxrpc/conn_client.c | 525 ++++++++++-------------------------------- net/rxrpc/conn_event.c | 50 +--- net/rxrpc/conn_object.c | 19 +- net/rxrpc/conn_service.c | 1 net/rxrpc/io_thread.c | 13 + net/rxrpc/local_object.c | 6 net/rxrpc/proc.c | 1 net/rxrpc/rxkad.c | 21 +- net/rxrpc/security.c | 34 +-- net/rxrpc/sendmsg.c | 64 +++++ 14 files changed, 295 insertions(+), 526 deletions(-) diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index a4c5bc481cfc..3e228aa2d2b1 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -225,7 +225,6 @@ EM(rxrpc_conn_put_call, "PUT call ") \ EM(rxrpc_conn_put_call_input, "PUT inp-call") \ EM(rxrpc_conn_put_conn_input, "PUT inp-conn") \ - EM(rxrpc_conn_put_discard, "PUT discard ") \ EM(rxrpc_conn_put_discard_idle, "PUT disc-idl") \ EM(rxrpc_conn_put_local_dead, "PUT loc-dead") \ EM(rxrpc_conn_put_noreuse, "PUT noreuse ") \ @@ -247,12 +246,11 @@ EM(rxrpc_client_chan_activate, "ChActv") \ EM(rxrpc_client_chan_disconnect, "ChDisc") \ EM(rxrpc_client_chan_pass, "ChPass") \ - EM(rxrpc_client_chan_wait_failed, "ChWtFl") \ EM(rxrpc_client_cleanup, "Clean ") \ EM(rxrpc_client_discard, "Discar") \ - EM(rxrpc_client_duplicate, "Duplic") \ EM(rxrpc_client_exposed, "Expose") \ EM(rxrpc_client_replace, "Replac") \ + EM(rxrpc_client_queue_new_call, "Q-Call") \ EM(rxrpc_client_to_active, "->Actv") \ E_(rxrpc_client_to_idle, "->Idle") @@ -280,6 +278,7 @@ EM(rxrpc_call_put_sendmsg, "PUT sendmsg ") \ EM(rxrpc_call_put_unnotify, "PUT unnotify") \ EM(rxrpc_call_put_userid_exists, "PUT u-exists") \ + EM(rxrpc_call_put_userid, "PUT user-id ") \ EM(rxrpc_call_see_accept, "SEE accept ") \ EM(rxrpc_call_see_activate_client, "SEE act-clnt") \ EM(rxrpc_call_see_connect_failed, "SEE con-fail") \ diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index e79263927a6d..04f6253ae169 100644 --- a/net/rxrpc/ar-internal.h +++ 
b/net/rxrpc/ar-internal.h @@ -293,7 +293,6 @@ struct rxrpc_local { struct rb_root client_bundles; /* Client connection bundles by socket params */ spinlock_t client_bundles_lock; /* Lock for client_bundles */ bool kill_all_client_conns; - spinlock_t client_conn_cache_lock; /* Lock for ->*_client_conns */ struct list_head idle_client_conns; struct timer_list client_conn_reap_timer; unsigned long client_conn_flags; @@ -305,7 +304,8 @@ struct rxrpc_local { bool dead; bool service_closed; /* Service socket closed */ struct idr conn_ids; /* List of connection IDs */ - spinlock_t conn_lock; /* Lock for client connection pool */ + struct list_head new_client_calls; /* Newly created client calls need connection */ + spinlock_t client_call_lock; /* Lock for ->new_client_calls */ struct sockaddr_rxrpc srx; /* local address */ }; @@ -386,7 +386,6 @@ enum rxrpc_call_completion { * Bits in the connection flags. */ enum rxrpc_conn_flag { - RXRPC_CONN_HAS_IDR, /* Has a client conn ID assigned */ RXRPC_CONN_IN_SERVICE_CONNS, /* Conn is in peer->service_conns */ RXRPC_CONN_DONT_REUSE, /* Don't reuse this connection */ RXRPC_CONN_PROBING_FOR_UPGRADE, /* Probing for service upgrade */ @@ -414,6 +413,7 @@ enum rxrpc_conn_event { */ enum rxrpc_conn_proto_state { RXRPC_CONN_UNUSED, /* Connection not yet attempted */ + RXRPC_CONN_CLIENT_UNSECURED, /* Client connection needs security init */ RXRPC_CONN_CLIENT, /* Client connection */ RXRPC_CONN_SERVICE_PREALLOC, /* Service connection preallocation */ RXRPC_CONN_SERVICE_UNSECURED, /* Service unsecured connection */ @@ -437,11 +437,9 @@ struct rxrpc_bundle { u32 security_level; /* Security level selected */ u16 service_id; /* Service ID for this connection */ bool try_upgrade; /* True if the bundle is attempting upgrade */ - bool alloc_conn; /* True if someone's getting a conn */ bool exclusive; /* T if conn is exclusive */ bool upgrade; /* T if service ID can be upgraded */ - short alloc_error; /* Error from last conn allocation */ - spinlock_t channel_lock; + unsigned short alloc_error; /* Error from last conn allocation */ struct rb_node local_node; /* Node in local->client_conns */ struct list_head waiting_calls; /* Calls waiting for channels */ unsigned long avail_chans; /* Mask of available channels */ @@ -469,7 +467,7 @@ struct rxrpc_connection { unsigned char act_chans; /* Mask of active channels */ struct rxrpc_channel { unsigned long final_ack_at; /* Time at which to issue final ACK */ - struct rxrpc_call __rcu *call; /* Active call */ + struct rxrpc_call *call; /* Active call */ unsigned int call_debug_id; /* call->debug_id */ u32 call_id; /* ID of current call */ u32 call_counter; /* Call ID counter */ @@ -490,6 +488,7 @@ struct rxrpc_connection { struct list_head link; /* link in master connection list */ struct sk_buff_head rx_queue; /* received conn-level packets */ + struct mutex security_lock; /* Lock for security management */ const struct rxrpc_security *security; /* applied security module */ union { struct { @@ -620,7 +619,7 @@ struct rxrpc_call { struct work_struct destroyer; /* In-process-context destroyer */ rxrpc_notify_rx_t notify_rx; /* kernel service Rx notification function */ struct list_head link; /* link in master call list */ - struct list_head chan_wait_link; /* Link in conn->bundle->waiting_calls */ + struct list_head wait_link; /* Link in local->new_client_calls */ struct hlist_node error_link; /* link in error distribution list */ struct list_head accept_link; /* Link in rx->acceptq */ struct list_head recvmsg_link; /* Link in 
rx->recvmsg_q */ @@ -867,6 +866,7 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *, struct sockaddr_rxrpc *, struct rxrpc_call_params *, gfp_t, unsigned int); +void rxrpc_start_call_timer(struct rxrpc_call *); void rxrpc_incoming_call(struct rxrpc_sock *, struct rxrpc_call *, struct sk_buff *); void rxrpc_release_call(struct rxrpc_sock *, struct rxrpc_call *); @@ -901,6 +901,7 @@ static inline void rxrpc_set_call_state(struct rxrpc_call *call, { /* Order write of completion info before write of ->state. */ smp_store_release(&call->_state, state); + wake_up(&call->waitq); } static inline enum rxrpc_call_state __rxrpc_call_state(const struct rxrpc_call *call) @@ -936,10 +937,11 @@ extern unsigned int rxrpc_reap_client_connections; extern unsigned long rxrpc_conn_idle_client_expiry; extern unsigned long rxrpc_conn_idle_client_fast_expiry; -void rxrpc_destroy_client_conn_ids(struct rxrpc_local *); +void rxrpc_purge_client_connections(struct rxrpc_local *); struct rxrpc_bundle *rxrpc_get_bundle(struct rxrpc_bundle *, enum rxrpc_bundle_trace); void rxrpc_put_bundle(struct rxrpc_bundle *, enum rxrpc_bundle_trace); -int rxrpc_connect_call(struct rxrpc_call *, gfp_t); +int rxrpc_look_up_bundle(struct rxrpc_call *, gfp_t); +void rxrpc_connect_client_calls(struct rxrpc_local *); void rxrpc_expose_client_call(struct rxrpc_call *); void rxrpc_disconnect_client_call(struct rxrpc_bundle *, struct rxrpc_call *); void rxrpc_deactivate_bundle(struct rxrpc_bundle *); diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c index 88b9c4f5b122..6fed91fa848f 100644 --- a/net/rxrpc/call_object.c +++ b/net/rxrpc/call_object.c @@ -150,7 +150,7 @@ struct rxrpc_call *rxrpc_alloc_call(struct rxrpc_sock *rx, gfp_t gfp, timer_setup(&call->timer, rxrpc_call_timer_expired, 0); INIT_WORK(&call->destroyer, rxrpc_destroy_call); INIT_LIST_HEAD(&call->link); - INIT_LIST_HEAD(&call->chan_wait_link); + INIT_LIST_HEAD(&call->wait_link); INIT_LIST_HEAD(&call->accept_link); INIT_LIST_HEAD(&call->recvmsg_link); INIT_LIST_HEAD(&call->sock_link); @@ -243,7 +243,7 @@ static struct rxrpc_call *rxrpc_alloc_client_call(struct rxrpc_sock *rx, /* * Initiate the call ack/resend/expiry timer. */ -static void rxrpc_start_call_timer(struct rxrpc_call *call) +void rxrpc_start_call_timer(struct rxrpc_call *call) { unsigned long now = jiffies; unsigned long j = now + MAX_JIFFY_OFFSET; @@ -287,6 +287,39 @@ static void rxrpc_put_call_slot(struct rxrpc_call *call) up(limiter); } +/* + * Start the process of connecting a call. We obtain a peer and a connection + * bundle, but the actual association of a call with a connection is offloaded + * to the I/O thread to simplify locking. + */ +static int rxrpc_connect_call(struct rxrpc_call *call, gfp_t gfp) +{ + struct rxrpc_local *local = call->local; + int ret = 0; + + _enter("{%d,%lx},", call->debug_id, call->user_call_ID); + + call->peer = rxrpc_lookup_peer(local, &call->dest_srx, gfp); + if (!call->peer) + goto error; + + ret = rxrpc_look_up_bundle(call, gfp); + if (ret < 0) + goto error; + + trace_rxrpc_client(NULL, -1, rxrpc_client_queue_new_call); + rxrpc_get_call(call, rxrpc_call_get_io_thread); + spin_lock(&local->client_call_lock); + list_add_tail(&call->wait_link, &local->new_client_calls); + spin_unlock(&local->client_call_lock); + rxrpc_wake_up_io_thread(local); + return 0; + +error: + __set_bit(RXRPC_CALL_DISCONNECTED, &call->flags); + return ret; +} + /* * Set up a call for the given parameters. * - Called with the socket lock held, which it must release. 
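As an aside, the hand-off performed by rxrpc_connect_call() above is a plain producer/consumer arrangement: the application thread looks up the peer and the bundle, parks the call on local->new_client_calls under client_call_lock, and then wakes the I/O thread, which later attaches the call to a connection and channel. The following is a minimal, self-contained userspace sketch of that pattern only; the names and the use of pthreads are illustrative and are not part of the rxrpc code.

/* Illustrative sketch: an application thread queues "calls" on a locked list
 * and wakes a single worker thread; the worker drains the list and completes
 * the connection step.  All names here are hypothetical.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct sketch_call {
	int			id;
	bool			connected;
	struct sketch_call	*next;
	pthread_cond_t		done;
};

static struct sketch_call *new_calls;		/* stands in for local->new_client_calls */
static pthread_mutex_t call_lock = PTHREAD_MUTEX_INITIALIZER;	/* ~ local->client_call_lock */
static pthread_cond_t io_wake = PTHREAD_COND_INITIALIZER;	/* ~ rxrpc_wake_up_io_thread() */

/* Application side: queue the call for the worker (cf. rxrpc_connect_call()). */
static void sketch_connect_call(struct sketch_call *call)
{
	pthread_mutex_lock(&call_lock);
	call->next = new_calls;
	new_calls = call;
	pthread_cond_signal(&io_wake);
	pthread_mutex_unlock(&call_lock);
}

/* Worker side: drain the queue and finish "connecting" each call
 * (cf. rxrpc_connect_client_calls() run from the I/O thread).  The real I/O
 * thread loops forever; this sketch services one batch and exits.
 */
static void *sketch_io_thread(void *unused)
{
	(void)unused;
	pthread_mutex_lock(&call_lock);
	while (!new_calls)
		pthread_cond_wait(&io_wake, &call_lock);
	while (new_calls) {
		struct sketch_call *call = new_calls;

		new_calls = call->next;
		call->connected = true;		/* channel assignment would happen here */
		pthread_cond_signal(&call->done);
	}
	pthread_mutex_unlock(&call_lock);
	return NULL;
}

int main(void)
{
	struct sketch_call call = { .id = 1 };
	pthread_t io;

	pthread_cond_init(&call.done, NULL);
	pthread_create(&io, NULL, sketch_io_thread, NULL);
	sketch_connect_call(&call);

	/* The submitter then waits for the worker, much as sendmsg() now waits
	 * before transmitting.
	 */
	pthread_mutex_lock(&call_lock);
	while (!call.connected)
		pthread_cond_wait(&call.done, &call_lock);
	pthread_mutex_unlock(&call_lock);

	pthread_join(io, NULL);
	printf("call %d connected\n", call.id);
	return 0;
}

In the patch itself the wake-up is rxrpc_wake_up_io_thread() and the submitting thread does not block at this point at all; it only waits later, in the rxrpc_wait_to_be_connected() helper added to sendmsg.c below, once it actually wants to transmit data.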
@@ -370,10 +403,6 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx, if (ret < 0) goto error_attached_to_socket; - rxrpc_see_call(call, rxrpc_call_see_connected); - - rxrpc_start_call_timer(call); - _leave(" = %p [new]", call); return call; @@ -388,22 +417,20 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx, rxrpc_prefail_call(call, RXRPC_CALL_LOCAL_ERROR, -EEXIST); trace_rxrpc_call(call->debug_id, refcount_read(&call->ref), 0, rxrpc_call_see_userid_exists); - rxrpc_release_call(rx, call); mutex_unlock(&call->user_mutex); rxrpc_put_call(call, rxrpc_call_put_userid_exists); _leave(" = -EEXIST"); return ERR_PTR(-EEXIST); /* We got an error, but the call is attached to the socket and is in - * need of release. However, we might now race with recvmsg() when - * completing the call queues it. Return 0 from sys_sendmsg() and + * need of release. However, we might now race with recvmsg() when it + * completion notifies the socket. Return 0 from sys_sendmsg() and * leave the error to recvmsg() to deal with. */ error_attached_to_socket: trace_rxrpc_call(call->debug_id, refcount_read(&call->ref), ret, rxrpc_call_see_connect_failed); - set_bit(RXRPC_CALL_DISCONNECTED, &call->flags); - rxrpc_prefail_call(call, RXRPC_CALL_LOCAL_ERROR, ret); + rxrpc_set_call_completion(call, RXRPC_CALL_LOCAL_ERROR, 0, ret); _leave(" = c=%08x [err]", call->debug_id); return call; } @@ -458,7 +485,7 @@ void rxrpc_incoming_call(struct rxrpc_sock *rx, chan = sp->hdr.cid & RXRPC_CHANNELMASK; conn->channels[chan].call_counter = call->call_id; conn->channels[chan].call_id = call->call_id; - rcu_assign_pointer(conn->channels[chan].call, call); + conn->channels[chan].call = call; spin_unlock(&conn->state_lock); spin_lock(&conn->peer->lock); @@ -518,7 +545,7 @@ static void rxrpc_cleanup_ring(struct rxrpc_call *call) void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call) { struct rxrpc_connection *conn = call->conn; - bool put = false; + bool put = false, putu = false; _enter("{%d,%d}", call->debug_id, refcount_read(&call->ref)); @@ -553,7 +580,7 @@ void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call) if (test_and_clear_bit(RXRPC_CALL_HAS_USERID, &call->flags)) { rb_erase(&call->sock_node, &rx->calls); memset(&call->sock_node, 0xdd, sizeof(call->sock_node)); - rxrpc_put_call(call, rxrpc_call_put_userid_exists); + putu = true; } list_del(&call->sock_link); @@ -561,6 +588,9 @@ void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call) _debug("RELEASE CALL %p (%d CONN %p)", call, call->debug_id, conn); + if (putu) + rxrpc_put_call(call, rxrpc_call_put_userid); + _leave(""); } diff --git a/net/rxrpc/call_state.c b/net/rxrpc/call_state.c index c1f131618ac4..59a5588805ac 100644 --- a/net/rxrpc/call_state.c +++ b/net/rxrpc/call_state.c @@ -65,5 +65,5 @@ void rxrpc_prefail_call(struct rxrpc_call *call, enum rxrpc_call_completion comp call->completion = compl; call->_state = RXRPC_CALL_COMPLETE; trace_rxrpc_call_complete(call); - __set_bit(RXRPC_CALL_RELEASED, &call->flags); + WARN_ON_ONCE(__test_and_set_bit(RXRPC_CALL_RELEASED, &call->flags)); } diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c index 6b6b2f72c0ed..6268e9bb9649 100644 --- a/net/rxrpc/conn_client.c +++ b/net/rxrpc/conn_client.c @@ -39,61 +39,19 @@ static void rxrpc_activate_bundle(struct rxrpc_bundle *bundle) atomic_inc(&bundle->active); } -/* - * Get a connection ID and epoch for a client connection from the global pool. 
- * The connection struct pointer is then recorded in the idr radix tree. The - * epoch doesn't change until the client is rebooted (or, at least, unless the - * module is unloaded). - */ -static int rxrpc_get_client_connection_id(struct rxrpc_connection *conn, - gfp_t gfp) -{ - struct rxrpc_local *local = conn->local; - int id; - - _enter(""); - - idr_preload(gfp); - spin_lock(&local->conn_lock); - - id = idr_alloc_cyclic(&local->conn_ids, conn, - 1, 0x40000000, GFP_NOWAIT); - if (id < 0) - goto error; - - spin_unlock(&local->conn_lock); - idr_preload_end(); - - conn->proto.epoch = local->rxnet->epoch; - conn->proto.cid = id << RXRPC_CIDSHIFT; - set_bit(RXRPC_CONN_HAS_IDR, &conn->flags); - _leave(" [CID %x]", conn->proto.cid); - return 0; - -error: - spin_unlock(&local->conn_lock); - idr_preload_end(); - _leave(" = %d", id); - return id; -} - /* * Release a connection ID for a client connection. */ static void rxrpc_put_client_connection_id(struct rxrpc_local *local, struct rxrpc_connection *conn) { - if (test_bit(RXRPC_CONN_HAS_IDR, &conn->flags)) { - spin_lock(&local->conn_lock); - idr_remove(&local->conn_ids, conn->proto.cid >> RXRPC_CIDSHIFT); - spin_unlock(&local->conn_lock); - } + idr_remove(&local->conn_ids, conn->proto.cid >> RXRPC_CIDSHIFT); } /* * Destroy the client connection ID tree. */ -void rxrpc_destroy_client_conn_ids(struct rxrpc_local *local) +static void rxrpc_destroy_client_conn_ids(struct rxrpc_local *local) { struct rxrpc_connection *conn; int id; @@ -129,7 +87,6 @@ static struct rxrpc_bundle *rxrpc_alloc_bundle(struct rxrpc_call *call, bundle->security_level = call->security_level; refcount_set(&bundle->ref, 1); atomic_set(&bundle->active, 1); - spin_lock_init(&bundle->channel_lock); INIT_LIST_HEAD(&bundle->waiting_calls); trace_rxrpc_bundle(bundle->debug_id, 1, rxrpc_bundle_new); } @@ -169,69 +126,68 @@ void rxrpc_put_bundle(struct rxrpc_bundle *bundle, enum rxrpc_bundle_trace why) } } +/* + * Get rid of outstanding client connection preallocations when a local + * endpoint is destroyed. + */ +void rxrpc_purge_client_connections(struct rxrpc_local *local) +{ + rxrpc_destroy_client_conn_ids(local); +} + /* * Allocate a client connection. 
*/ static struct rxrpc_connection * -rxrpc_alloc_client_connection(struct rxrpc_bundle *bundle, gfp_t gfp) +rxrpc_alloc_client_connection(struct rxrpc_bundle *bundle) { struct rxrpc_connection *conn; - struct rxrpc_net *rxnet = bundle->local->rxnet; - int ret; + struct rxrpc_local *local = bundle->local; + struct rxrpc_net *rxnet = local->rxnet; + int id; _enter(""); - conn = rxrpc_alloc_connection(rxnet, gfp); - if (!conn) { - _leave(" = -ENOMEM"); + conn = rxrpc_alloc_connection(rxnet, GFP_ATOMIC | __GFP_NOWARN); + if (!conn) return ERR_PTR(-ENOMEM); + + id = idr_alloc_cyclic(&local->conn_ids, conn, 1, 0x40000000, + GFP_ATOMIC | __GFP_NOWARN); + if (id < 0) { + kfree(conn); + return ERR_PTR(id); } refcount_set(&conn->ref, 1); - conn->bundle = bundle; - conn->local = bundle->local; - conn->peer = bundle->peer; - conn->key = bundle->key; + conn->proto.cid = id << RXRPC_CIDSHIFT; + conn->proto.epoch = local->rxnet->epoch; + conn->out_clientflag = RXRPC_CLIENT_INITIATED; + conn->bundle = rxrpc_get_bundle(bundle, rxrpc_bundle_get_client_conn); + conn->local = rxrpc_get_local(bundle->local, rxrpc_local_get_client_conn); + conn->peer = rxrpc_get_peer(bundle->peer, rxrpc_peer_get_client_conn); + conn->key = key_get(bundle->key); + conn->security = bundle->security; conn->exclusive = bundle->exclusive; conn->upgrade = bundle->upgrade; conn->orig_service_id = bundle->service_id; conn->security_level = bundle->security_level; - conn->out_clientflag = RXRPC_CLIENT_INITIATED; - conn->state = RXRPC_CONN_CLIENT; + conn->state = RXRPC_CONN_CLIENT_UNSECURED; conn->service_id = conn->orig_service_id; - ret = rxrpc_get_client_connection_id(conn, gfp); - if (ret < 0) - goto error_0; - - ret = rxrpc_init_client_conn_security(conn); - if (ret < 0) - goto error_1; + if (conn->security == &rxrpc_no_security) + conn->state = RXRPC_CONN_CLIENT; atomic_inc(&rxnet->nr_conns); write_lock(&rxnet->conn_lock); list_add_tail(&conn->proc_link, &rxnet->conn_proc_list); write_unlock(&rxnet->conn_lock); - rxrpc_get_bundle(bundle, rxrpc_bundle_get_client_conn); - rxrpc_get_peer(conn->peer, rxrpc_peer_get_client_conn); - rxrpc_get_local(conn->local, rxrpc_local_get_client_conn); - key_get(conn->key); - - trace_rxrpc_conn(conn->debug_id, refcount_read(&conn->ref), - rxrpc_conn_new_client); + rxrpc_see_connection(conn, rxrpc_conn_new_client); atomic_inc(&rxnet->nr_client_conns); trace_rxrpc_client(conn, -1, rxrpc_client_alloc); - _leave(" = %p", conn); return conn; - -error_1: - rxrpc_put_client_connection_id(bundle->local, conn); -error_0: - kfree(conn); - _leave(" = %d", ret); - return ERR_PTR(ret); } /* @@ -249,7 +205,8 @@ static bool rxrpc_may_reuse_conn(struct rxrpc_connection *conn) if (test_bit(RXRPC_CONN_DONT_REUSE, &conn->flags)) goto dont_reuse; - if (conn->state != RXRPC_CONN_CLIENT || + if ((conn->state != RXRPC_CONN_CLIENT_UNSECURED && + conn->state != RXRPC_CONN_CLIENT) || conn->proto.epoch != rxnet->epoch) goto mark_dont_reuse; @@ -280,7 +237,7 @@ static bool rxrpc_may_reuse_conn(struct rxrpc_connection *conn) * Look up the conn bundle that matches the connection parameters, adding it if * it doesn't yet exist. 
*/ -static struct rxrpc_bundle *rxrpc_look_up_bundle(struct rxrpc_call *call, gfp_t gfp) +int rxrpc_look_up_bundle(struct rxrpc_call *call, gfp_t gfp) { static atomic_t rxrpc_bundle_id; struct rxrpc_bundle *bundle, *candidate; @@ -295,7 +252,7 @@ static struct rxrpc_bundle *rxrpc_look_up_bundle(struct rxrpc_call *call, gfp_t if (test_bit(RXRPC_CALL_EXCLUSIVE, &call->flags)) { call->bundle = rxrpc_alloc_bundle(call, gfp); - return call->bundle; + return call->bundle ? 0 : -ENOMEM; } /* First, see if the bundle is already there. */ @@ -324,7 +281,7 @@ static struct rxrpc_bundle *rxrpc_look_up_bundle(struct rxrpc_call *call, gfp_t /* It wasn't. We need to add one. */ candidate = rxrpc_alloc_bundle(call, gfp); if (!candidate) - return ERR_PTR(-ENOMEM); + return -ENOMEM; _debug("search 2"); spin_lock(&local->client_bundles_lock); @@ -355,7 +312,7 @@ static struct rxrpc_bundle *rxrpc_look_up_bundle(struct rxrpc_call *call, gfp_t call->bundle = rxrpc_get_bundle(candidate, rxrpc_bundle_get_client_call); spin_unlock(&local->client_bundles_lock); _leave(" = B=%u [new]", call->bundle->debug_id); - return call->bundle; + return 0; found_bundle_free: rxrpc_free_bundle(candidate); @@ -364,160 +321,77 @@ static struct rxrpc_bundle *rxrpc_look_up_bundle(struct rxrpc_call *call, gfp_t rxrpc_activate_bundle(bundle); spin_unlock(&local->client_bundles_lock); _leave(" = B=%u [found]", call->bundle->debug_id); - return call->bundle; -} - -/* - * Create or find a client bundle to use for a call. - * - * If we return with a connection, the call will be on its waiting list. It's - * left to the caller to assign a channel and wake up the call. - */ -static struct rxrpc_bundle *rxrpc_prep_call(struct rxrpc_call *call, gfp_t gfp) -{ - struct rxrpc_bundle *bundle; - - _enter("{%d,%lx},", call->debug_id, call->user_call_ID); - - call->peer = rxrpc_lookup_peer(call->local, &call->dest_srx, gfp); - if (!call->peer) - goto error; - - call->tx_last_sent = ktime_get_real(); - call->cong_ssthresh = call->peer->cong_ssthresh; - if (call->cong_cwnd >= call->cong_ssthresh) - call->cong_mode = RXRPC_CALL_CONGEST_AVOIDANCE; - else - call->cong_mode = RXRPC_CALL_SLOW_START; - - /* Find the client connection bundle. */ - bundle = rxrpc_look_up_bundle(call, gfp); - if (!bundle) - goto error; - - /* Get this call queued. Someone else may activate it whilst we're - * lining up a new connection, but that's fine. - */ - spin_lock(&bundle->channel_lock); - list_add_tail(&call->chan_wait_link, &bundle->waiting_calls); - spin_unlock(&bundle->channel_lock); - - _leave(" = [B=%x]", bundle->debug_id); - return bundle; - -error: - _leave(" = -ENOMEM"); - return ERR_PTR(-ENOMEM); + return 0; } /* * Allocate a new connection and add it into a bundle. 
*/ -static void rxrpc_add_conn_to_bundle(struct rxrpc_bundle *bundle, gfp_t gfp) - __releases(bundle->channel_lock) +static bool rxrpc_add_conn_to_bundle(struct rxrpc_bundle *bundle, + unsigned int slot) { - struct rxrpc_connection *candidate = NULL, *old = NULL; - bool conflict; - int i; - - _enter(""); - - conflict = bundle->alloc_conn; - if (!conflict) - bundle->alloc_conn = true; - spin_unlock(&bundle->channel_lock); - if (conflict) { - _leave(" [conf]"); - return; - } - - candidate = rxrpc_alloc_client_connection(bundle, gfp); - - spin_lock(&bundle->channel_lock); - bundle->alloc_conn = false; - - if (IS_ERR(candidate)) { - bundle->alloc_error = PTR_ERR(candidate); - spin_unlock(&bundle->channel_lock); - _leave(" [err %ld]", PTR_ERR(candidate)); - return; - } - - bundle->alloc_error = 0; + struct rxrpc_connection *conn, *old; + unsigned int shift = slot * RXRPC_MAXCALLS; + unsigned int i; - for (i = 0; i < ARRAY_SIZE(bundle->conns); i++) { - unsigned int shift = i * RXRPC_MAXCALLS; - int j; - - old = bundle->conns[i]; - if (!rxrpc_may_reuse_conn(old)) { - if (old) - trace_rxrpc_client(old, -1, rxrpc_client_replace); - candidate->bundle_shift = shift; - rxrpc_activate_bundle(bundle); - bundle->conns[i] = candidate; - for (j = 0; j < RXRPC_MAXCALLS; j++) - set_bit(shift + j, &bundle->avail_chans); - candidate = NULL; - break; - } - - old = NULL; + old = bundle->conns[slot]; + if (old) { + bundle->conns[slot] = NULL; + trace_rxrpc_client(old, -1, rxrpc_client_replace); + rxrpc_put_connection(old, rxrpc_conn_put_noreuse); } - spin_unlock(&bundle->channel_lock); - - if (candidate) { - _debug("discard C=%x", candidate->debug_id); - trace_rxrpc_client(candidate, -1, rxrpc_client_duplicate); - rxrpc_put_connection(candidate, rxrpc_conn_put_discard); + conn = rxrpc_alloc_client_connection(bundle); + if (IS_ERR(conn)) { + bundle->alloc_error = PTR_ERR(conn); + return false; } - rxrpc_put_connection(old, rxrpc_conn_put_noreuse); - _leave(""); + rxrpc_activate_bundle(bundle); + conn->bundle_shift = shift; + bundle->conns[slot] = conn; + for (i = 0; i < RXRPC_MAXCALLS; i++) + set_bit(shift + i, &bundle->avail_chans); + return true; } /* * Add a connection to a bundle if there are no usable connections or we have * connections waiting for extra capacity. */ -static void rxrpc_maybe_add_conn(struct rxrpc_bundle *bundle, gfp_t gfp) +static bool rxrpc_bundle_has_space(struct rxrpc_bundle *bundle) { - struct rxrpc_call *call; - int i, usable; + int slot = -1, i, usable; _enter(""); - spin_lock(&bundle->channel_lock); + bundle->alloc_error = 0; /* See if there are any usable connections. */ usable = 0; - for (i = 0; i < ARRAY_SIZE(bundle->conns); i++) + for (i = 0; i < ARRAY_SIZE(bundle->conns); i++) { if (rxrpc_may_reuse_conn(bundle->conns[i])) usable++; - - if (!usable && !list_empty(&bundle->waiting_calls)) { - call = list_first_entry(&bundle->waiting_calls, - struct rxrpc_call, chan_wait_link); - if (test_bit(RXRPC_CALL_UPGRADE, &call->flags)) - bundle->try_upgrade = true; + else if (slot == -1) + slot = i; } + if (!usable && bundle->upgrade) + bundle->try_upgrade = true; + if (!usable) goto alloc_conn; if (!bundle->avail_chans && !bundle->try_upgrade && - !list_empty(&bundle->waiting_calls) && usable < ARRAY_SIZE(bundle->conns)) goto alloc_conn; - spin_unlock(&bundle->channel_lock); _leave(""); - return; + return usable; alloc_conn: - return rxrpc_add_conn_to_bundle(bundle, gfp); + return slot >= 0 ? 
rxrpc_add_conn_to_bundle(bundle, slot) : false; } /* @@ -531,11 +405,13 @@ static void rxrpc_activate_one_channel(struct rxrpc_connection *conn, struct rxrpc_channel *chan = &conn->channels[channel]; struct rxrpc_bundle *bundle = conn->bundle; struct rxrpc_call *call = list_entry(bundle->waiting_calls.next, - struct rxrpc_call, chan_wait_link); + struct rxrpc_call, wait_link); u32 call_id = chan->call_counter + 1; _enter("C=%x,%u", conn->debug_id, channel); + list_del_init(&call->wait_link); + trace_rxrpc_client(conn, channel, rxrpc_client_chan_activate); /* Cancel the final ACK on the previous call if it hasn't been sent yet @@ -545,65 +421,50 @@ static void rxrpc_activate_one_channel(struct rxrpc_connection *conn, clear_bit(conn->bundle_shift + channel, &bundle->avail_chans); rxrpc_see_call(call, rxrpc_call_see_activate_client); - list_del_init(&call->chan_wait_link); call->conn = rxrpc_get_connection(conn, rxrpc_conn_get_activate_call); call->cid = conn->proto.cid | channel; call->call_id = call_id; call->dest_srx.srx_service = conn->service_id; - - trace_rxrpc_connect_call(call); - - rxrpc_set_call_state(call, RXRPC_CALL_CLIENT_SEND_REQUEST); - - /* Paired with the read barrier in rxrpc_connect_call(). This orders - * cid and epoch in the connection wrt to call_id without the need to - * take the channel_lock. - * - * We provisionally assign a callNumber at this point, but we don't - * confirm it until the call is about to be exposed. - * - * TODO: Pair with a barrier in the data_ready handler when that looks - * at the call ID through a connection channel. - */ - smp_wmb(); + call->cong_ssthresh = call->peer->cong_ssthresh; + if (call->cong_cwnd >= call->cong_ssthresh) + call->cong_mode = RXRPC_CALL_CONGEST_AVOIDANCE; + else + call->cong_mode = RXRPC_CALL_SLOW_START; chan->call_id = call_id; chan->call_debug_id = call->debug_id; - rcu_assign_pointer(chan->call, call); + chan->call = call; + + rxrpc_see_call(call, rxrpc_call_see_connected); + trace_rxrpc_connect_call(call); + call->tx_last_sent = ktime_get_real(); + rxrpc_start_call_timer(call); + rxrpc_set_call_state(call, RXRPC_CALL_CLIENT_SEND_REQUEST); wake_up(&call->waitq); } /* * Remove a connection from the idle list if it's on it. */ -static void rxrpc_unidle_conn(struct rxrpc_bundle *bundle, struct rxrpc_connection *conn) +static void rxrpc_unidle_conn(struct rxrpc_connection *conn) { - struct rxrpc_local *local = bundle->local; - bool drop_ref; - if (!list_empty(&conn->cache_link)) { - drop_ref = false; - spin_lock(&local->client_conn_cache_lock); - if (!list_empty(&conn->cache_link)) { - list_del_init(&conn->cache_link); - drop_ref = true; - } - spin_unlock(&local->client_conn_cache_lock); - if (drop_ref) - rxrpc_put_connection(conn, rxrpc_conn_put_unidle); + list_del_init(&conn->cache_link); + rxrpc_put_connection(conn, rxrpc_conn_put_unidle); } } /* - * Assign channels and callNumbers to waiting calls with channel_lock - * held by caller. + * Assign channels and callNumbers to waiting calls. 
*/ -static void rxrpc_activate_channels_locked(struct rxrpc_bundle *bundle) +static void rxrpc_activate_channels(struct rxrpc_bundle *bundle) { struct rxrpc_connection *conn; unsigned long avail, mask; unsigned int channel, slot; + trace_rxrpc_client(NULL, -1, rxrpc_client_activate_chans); + if (bundle->try_upgrade) mask = 1; else @@ -623,7 +484,7 @@ static void rxrpc_activate_channels_locked(struct rxrpc_bundle *bundle) if (bundle->try_upgrade) set_bit(RXRPC_CONN_PROBING_FOR_UPGRADE, &conn->flags); - rxrpc_unidle_conn(bundle, conn); + rxrpc_unidle_conn(conn); channel &= (RXRPC_MAXCALLS - 1); conn->act_chans |= 1 << channel; @@ -632,125 +493,24 @@ static void rxrpc_activate_channels_locked(struct rxrpc_bundle *bundle) } /* - * Assign channels and callNumbers to waiting calls. + * Connect waiting channels (called from the I/O thread). */ -static void rxrpc_activate_channels(struct rxrpc_bundle *bundle) -{ - _enter("B=%x", bundle->debug_id); - - trace_rxrpc_client(NULL, -1, rxrpc_client_activate_chans); - - if (!bundle->avail_chans) - return; - - spin_lock(&bundle->channel_lock); - rxrpc_activate_channels_locked(bundle); - spin_unlock(&bundle->channel_lock); - _leave(""); -} - -/* - * Wait for a callNumber and a channel to be granted to a call. - */ -static int rxrpc_wait_for_channel(struct rxrpc_bundle *bundle, - struct rxrpc_call *call, gfp_t gfp) +void rxrpc_connect_client_calls(struct rxrpc_local *local) { - DECLARE_WAITQUEUE(myself, current); - int ret = 0; - - _enter("%d", call->debug_id); - - if (!gfpflags_allow_blocking(gfp)) { - rxrpc_maybe_add_conn(bundle, gfp); - rxrpc_activate_channels(bundle); - ret = bundle->alloc_error ?: -EAGAIN; - goto out; - } - - add_wait_queue_exclusive(&call->waitq, &myself); - for (;;) { - rxrpc_maybe_add_conn(bundle, gfp); - rxrpc_activate_channels(bundle); - ret = bundle->alloc_error; - if (ret < 0) - break; - - switch (call->interruptibility) { - case RXRPC_INTERRUPTIBLE: - case RXRPC_PREINTERRUPTIBLE: - set_current_state(TASK_INTERRUPTIBLE); - break; - case RXRPC_UNINTERRUPTIBLE: - default: - set_current_state(TASK_UNINTERRUPTIBLE); - break; - } - if (rxrpc_call_state(call) != RXRPC_CALL_CLIENT_AWAIT_CONN) - break; - if ((call->interruptibility == RXRPC_INTERRUPTIBLE || - call->interruptibility == RXRPC_PREINTERRUPTIBLE) && - signal_pending(current)) { - ret = -ERESTARTSYS; - break; - } - schedule(); - } - remove_wait_queue(&call->waitq, &myself); - __set_current_state(TASK_RUNNING); - -out: - _leave(" = %d", ret); - return ret; -} - -/* - * find a connection for a call - * - called in process context with IRQs enabled - */ -int rxrpc_connect_call(struct rxrpc_call *call, gfp_t gfp) -{ - struct rxrpc_bundle *bundle; - int ret = 0; - - _enter("{%d,%lx},", call->debug_id, call->user_call_ID); - - rxrpc_get_call(call, rxrpc_call_get_io_thread); - - bundle = rxrpc_prep_call(call, gfp); - if (IS_ERR(bundle)) { - rxrpc_put_call(call, rxrpc_call_get_io_thread); - ret = PTR_ERR(bundle); - goto out; - } - - if (rxrpc_call_state(call) == RXRPC_CALL_CLIENT_AWAIT_CONN) { - ret = rxrpc_wait_for_channel(bundle, call, gfp); - if (ret < 0) - goto wait_failed; - } - -granted_channel: - /* Paired with the write barrier in rxrpc_activate_one_channel(). 
*/ - smp_rmb(); + struct rxrpc_call *call; -out: - _leave(" = %d", ret); - return ret; + while ((call = list_first_entry_or_null(&local->new_client_calls, + struct rxrpc_call, wait_link)) + ) { + struct rxrpc_bundle *bundle = call->bundle; -wait_failed: - spin_lock(&bundle->channel_lock); - list_del_init(&call->chan_wait_link); - spin_unlock(&bundle->channel_lock); + spin_lock(&local->client_call_lock); + list_move_tail(&call->wait_link, &bundle->waiting_calls); + spin_unlock(&local->client_call_lock); - if (rxrpc_call_state(call) != RXRPC_CALL_CLIENT_AWAIT_CONN) { - ret = 0; - goto granted_channel; + if (rxrpc_bundle_has_space(bundle)) + rxrpc_activate_channels(bundle); } - - trace_rxrpc_client(call->conn, ret, rxrpc_client_chan_wait_failed); - rxrpc_set_call_completion(call, RXRPC_CALL_LOCAL_ERROR, 0, ret); - rxrpc_disconnect_client_call(bundle, call); - goto out; } /* @@ -808,7 +568,6 @@ void rxrpc_disconnect_client_call(struct rxrpc_bundle *bundle, struct rxrpc_call _enter("c=%x", call->debug_id); - spin_lock(&bundle->channel_lock); set_bit(RXRPC_CALL_DISCONNECTED, &call->flags); /* Calls that have never actually been assigned a channel can simply be @@ -819,7 +578,7 @@ void rxrpc_disconnect_client_call(struct rxrpc_bundle *bundle, struct rxrpc_call _debug("call is waiting"); ASSERTCMP(call->call_id, ==, 0); ASSERT(!test_bit(RXRPC_CALL_EXPOSED, &call->flags)); - list_del_init(&call->chan_wait_link); + list_del_init(&call->wait_link); goto out; } @@ -828,10 +587,7 @@ void rxrpc_disconnect_client_call(struct rxrpc_bundle *bundle, struct rxrpc_call chan = &conn->channels[channel]; trace_rxrpc_client(conn, channel, rxrpc_client_chan_disconnect); - if (rcu_access_pointer(chan->call) != call) { - spin_unlock(&bundle->channel_lock); - BUG(); - } + BUG_ON(chan->call != call); may_reuse = rxrpc_may_reuse_conn(conn); @@ -852,9 +608,8 @@ void rxrpc_disconnect_client_call(struct rxrpc_bundle *bundle, struct rxrpc_call trace_rxrpc_client(conn, channel, rxrpc_client_to_active); bundle->try_upgrade = false; if (may_reuse) - rxrpc_activate_channels_locked(bundle); + rxrpc_activate_channels(bundle); } - } /* See if we can pass the channel directly to another call. */ @@ -879,7 +634,7 @@ void rxrpc_disconnect_client_call(struct rxrpc_bundle *bundle, struct rxrpc_call } /* Deactivate the channel. 
*/ - rcu_assign_pointer(chan->call, NULL); + chan->call = NULL; set_bit(conn->bundle_shift + channel, &conn->bundle->avail_chans); conn->act_chans &= ~(1 << channel); @@ -892,15 +647,12 @@ void rxrpc_disconnect_client_call(struct rxrpc_bundle *bundle, struct rxrpc_call conn->idle_timestamp = jiffies; rxrpc_get_connection(conn, rxrpc_conn_get_idle); - spin_lock(&local->client_conn_cache_lock); list_move_tail(&conn->cache_link, &local->idle_client_conns); - spin_unlock(&local->client_conn_cache_lock); rxrpc_set_client_reap_timer(local); } out: - spin_unlock(&bundle->channel_lock); rxrpc_put_call(call, rxrpc_call_put_io_thread); _leave(""); return; @@ -913,7 +665,6 @@ static void rxrpc_unbundle_conn(struct rxrpc_connection *conn) { struct rxrpc_bundle *bundle = conn->bundle; unsigned int bindex; - bool need_drop = false; int i; _enter("C=%x", conn->debug_id); @@ -921,18 +672,13 @@ static void rxrpc_unbundle_conn(struct rxrpc_connection *conn) if (conn->flags & RXRPC_CONN_FINAL_ACK_MASK) rxrpc_process_delayed_final_acks(conn, true); - spin_lock(&bundle->channel_lock); bindex = conn->bundle_shift / RXRPC_MAXCALLS; if (bundle->conns[bindex] == conn) { _debug("clear slot %u", bindex); bundle->conns[bindex] = NULL; for (i = 0; i < RXRPC_MAXCALLS; i++) clear_bit(conn->bundle_shift + i, &bundle->avail_chans); - need_drop = true; - } - spin_unlock(&bundle->channel_lock); - - if (need_drop) { + rxrpc_put_client_connection_id(bundle->local, conn); rxrpc_deactivate_bundle(bundle); rxrpc_put_connection(conn, rxrpc_conn_put_unbundle); } @@ -994,24 +740,16 @@ void rxrpc_discard_expired_client_conns(struct rxrpc_local *local) _enter(""); - if (list_empty(&local->idle_client_conns)) { - _leave(" [empty]"); - return; - } - /* We keep an estimate of what the number of conns ought to be after * we've discarded some so that we don't overdo the discarding. 
*/ nr_conns = atomic_read(&local->rxnet->nr_client_conns); next: - spin_lock(&local->client_conn_cache_lock); - - if (list_empty(&local->idle_client_conns)) - goto out; - - conn = list_entry(local->idle_client_conns.next, - struct rxrpc_connection, cache_link); + conn = list_first_entry_or_null(&local->idle_client_conns, + struct rxrpc_connection, cache_link); + if (!conn) + return; if (!local->kill_all_client_conns) { /* If the number of connections is over the reap limit, we @@ -1036,8 +774,6 @@ void rxrpc_discard_expired_client_conns(struct rxrpc_local *local) trace_rxrpc_client(conn, -1, rxrpc_client_discard); list_del_init(&conn->cache_link); - spin_unlock(&local->client_conn_cache_lock); - rxrpc_unbundle_conn(conn); /* Drop the ->cache_link ref */ rxrpc_put_connection(conn, rxrpc_conn_put_discard_idle); @@ -1057,8 +793,6 @@ void rxrpc_discard_expired_client_conns(struct rxrpc_local *local) if (!local->kill_all_client_conns) timer_reduce(&local->client_conn_reap_timer, conn_expires_at); -out: - spin_unlock(&local->client_conn_cache_lock); _leave(""); } @@ -1067,34 +801,19 @@ void rxrpc_discard_expired_client_conns(struct rxrpc_local *local) */ void rxrpc_clean_up_local_conns(struct rxrpc_local *local) { - struct rxrpc_connection *conn, *tmp; - LIST_HEAD(graveyard); + struct rxrpc_connection *conn; _enter(""); - spin_lock(&local->client_conn_cache_lock); local->kill_all_client_conns = true; - spin_unlock(&local->client_conn_cache_lock); del_timer_sync(&local->client_conn_reap_timer); - spin_lock(&local->client_conn_cache_lock); - - list_for_each_entry_safe(conn, tmp, &local->idle_client_conns, - cache_link) { - if (conn->local == local) { - atomic_dec(&conn->active); - trace_rxrpc_client(conn, -1, rxrpc_client_discard); - list_move(&conn->cache_link, &graveyard); - } - } - - spin_unlock(&local->client_conn_cache_lock); - - while (!list_empty(&graveyard)) { - conn = list_entry(graveyard.next, - struct rxrpc_connection, cache_link); + while ((conn = list_first_entry_or_null(&local->idle_client_conns, + struct rxrpc_connection, cache_link))) { list_del_init(&conn->cache_link); + atomic_dec(&conn->active); + trace_rxrpc_client(conn, -1, rxrpc_client_discard); rxrpc_unbundle_conn(conn); rxrpc_put_connection(conn, rxrpc_conn_put_local_dead); } diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c index 8009d7e62ae6..c8e7e1081020 100644 --- a/net/rxrpc/conn_event.c +++ b/net/rxrpc/conn_event.c @@ -100,9 +100,7 @@ void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn, /* If the last call got moved on whilst we were waiting to run, just * ignore this packet. 
*/ - call_id = READ_ONCE(chan->last_call); - /* Sync with __rxrpc_disconnect_call() */ - smp_rmb(); + call_id = chan->last_call; if (skb && call_id != sp->hdr.callNumber) return; @@ -119,9 +117,12 @@ void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn, iov[2].iov_base = &ack_info; iov[2].iov_len = sizeof(ack_info); + serial = atomic_inc_return(&conn->serial); + pkt.whdr.epoch = htonl(conn->proto.epoch); pkt.whdr.cid = htonl(conn->proto.cid | channel); pkt.whdr.callNumber = htonl(call_id); + pkt.whdr.serial = htonl(serial); pkt.whdr.seq = 0; pkt.whdr.type = chan->last_type; pkt.whdr.flags = conn->out_clientflag; @@ -158,31 +159,15 @@ void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn, iov[0].iov_len += sizeof(pkt.ack); len += sizeof(pkt.ack) + 3 + sizeof(ack_info); ioc = 3; - break; - - default: - return; - } - - /* Resync with __rxrpc_disconnect_call() and check that the last call - * didn't get advanced whilst we were filling out the packets. - */ - smp_rmb(); - if (READ_ONCE(chan->last_call) != call_id) - return; - - serial = atomic_inc_return(&conn->serial); - pkt.whdr.serial = htonl(serial); - switch (chan->last_type) { - case RXRPC_PACKET_TYPE_ABORT: - break; - case RXRPC_PACKET_TYPE_ACK: trace_rxrpc_tx_ack(chan->call_debug_id, serial, ntohl(pkt.ack.firstPacket), ntohl(pkt.ack.serial), pkt.ack.reason, 0); break; + + default: + return; } ret = kernel_sendmsg(conn->local->socket, &msg, iov, ioc, len); @@ -207,20 +192,14 @@ static void rxrpc_abort_calls(struct rxrpc_connection *conn) _enter("{%d},%x", conn->debug_id, conn->abort_code); - spin_lock(&conn->bundle->channel_lock); - for (i = 0; i < RXRPC_MAXCALLS; i++) { - call = rcu_dereference_protected( - conn->channels[i].call, - lockdep_is_held(&conn->bundle->channel_lock)); - if (call) + if ((call = conn->channels[i].call)) rxrpc_set_call_completion(call, conn->completion, conn->abort_code, conn->error); } - spin_unlock(&conn->bundle->channel_lock); _leave(""); } @@ -316,9 +295,7 @@ void rxrpc_process_delayed_final_acks(struct rxrpc_connection *conn, bool force) if (!test_bit(RXRPC_CONN_FINAL_ACK_0 + channel, &conn->flags)) continue; - smp_rmb(); /* vs rxrpc_disconnect_client_call */ - ack_at = READ_ONCE(chan->final_ack_at); - + ack_at = chan->final_ack_at; if (time_before(j, ack_at) && !force) { if (time_before(ack_at, next_j)) { next_j = ack_at; @@ -446,15 +423,8 @@ void rxrpc_input_conn_event(struct rxrpc_connection *conn, struct sk_buff *skb) if (conn->state != RXRPC_CONN_SERVICE) break; - spin_lock(&conn->bundle->channel_lock); - for (loop = 0; loop < RXRPC_MAXCALLS; loop++) - rxrpc_call_is_secure( - rcu_dereference_protected( - conn->channels[loop].call, - lockdep_is_held(&conn->bundle->channel_lock))); - - spin_unlock(&conn->bundle->channel_lock); + rxrpc_call_is_secure(conn->channels[loop].call); break; } diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c index ed591a1f82c2..c420a5e0cdd6 100644 --- a/net/rxrpc/conn_object.c +++ b/net/rxrpc/conn_object.c @@ -67,6 +67,7 @@ struct rxrpc_connection *rxrpc_alloc_connection(struct rxrpc_net *rxnet, INIT_WORK(&conn->destructor, rxrpc_clean_up_connection); INIT_LIST_HEAD(&conn->proc_link); INIT_LIST_HEAD(&conn->link); + mutex_init(&conn->security_lock); skb_queue_head_init(&conn->rx_queue); conn->rxnet = rxnet; conn->security = &rxrpc_no_security; @@ -157,7 +158,7 @@ void __rxrpc_disconnect_call(struct rxrpc_connection *conn, _enter("%d,%x", conn->debug_id, call->cid); - if (rcu_access_pointer(chan->call) == call) { + if (chan->call == call) { /* Save 
the result of the call so that we can repeat it if necessary * through the channel, whilst disposing of the actual call record. */ @@ -177,12 +178,9 @@ void __rxrpc_disconnect_call(struct rxrpc_connection *conn, break; } - /* Sync with rxrpc_conn_retransmit(). */ - smp_wmb(); chan->last_call = chan->call_id; chan->call_id = chan->call_counter; - - rcu_assign_pointer(chan->call, NULL); + chan->call = NULL; } _leave(""); @@ -207,10 +205,7 @@ void rxrpc_disconnect_call(struct rxrpc_call *call) if (rxrpc_is_client_call(call)) return rxrpc_disconnect_client_call(conn->bundle, call); - spin_lock(&conn->bundle->channel_lock); __rxrpc_disconnect_call(conn, call); - spin_unlock(&conn->bundle->channel_lock); - set_bit(RXRPC_CALL_DISCONNECTED, &call->flags); conn->idle_timestamp = jiffies; if (atomic_dec_and_test(&conn->active)) @@ -311,10 +306,10 @@ static void rxrpc_clean_up_connection(struct work_struct *work) container_of(work, struct rxrpc_connection, destructor); struct rxrpc_net *rxnet = conn->rxnet; - ASSERT(!rcu_access_pointer(conn->channels[0].call) && - !rcu_access_pointer(conn->channels[1].call) && - !rcu_access_pointer(conn->channels[2].call) && - !rcu_access_pointer(conn->channels[3].call)); + ASSERT(!conn->channels[0].call && + !conn->channels[1].call && + !conn->channels[2].call && + !conn->channels[3].call); ASSERT(list_empty(&conn->cache_link)); del_timer_sync(&conn->timer); diff --git a/net/rxrpc/conn_service.c b/net/rxrpc/conn_service.c index 2a55a88b2a5b..f30323de82bd 100644 --- a/net/rxrpc/conn_service.c +++ b/net/rxrpc/conn_service.c @@ -11,7 +11,6 @@ static struct rxrpc_bundle rxrpc_service_dummy_bundle = { .ref = REFCOUNT_INIT(1), .debug_id = UINT_MAX, - .channel_lock = __SPIN_LOCK_UNLOCKED(&rxrpc_service_dummy_bundle.channel_lock), }; /* diff --git a/net/rxrpc/io_thread.c b/net/rxrpc/io_thread.c index 6153ea162153..4990ab410b4d 100644 --- a/net/rxrpc/io_thread.c +++ b/net/rxrpc/io_thread.c @@ -376,10 +376,7 @@ static int rxrpc_input_packet_on_conn(struct rxrpc_connection *conn, return just_discard; } - rcu_read_lock(); - call = rxrpc_try_get_call(rcu_dereference(chan->call), - rxrpc_call_get_input); - rcu_read_unlock(); + call = rxrpc_try_get_call(chan->call, rxrpc_call_get_input); if (sp->hdr.callNumber > chan->call_id) { if (rxrpc_to_client(sp)) @@ -443,6 +440,9 @@ int rxrpc_io_thread(void *data) &local->client_conn_flags)) rxrpc_discard_expired_client_conns(local); + if (!list_empty(&local->new_client_calls)) + rxrpc_connect_client_calls(local); + /* Deal with calls that want immediate attention. 
*/ if ((call = list_first_entry_or_null(&local->call_attend_q, struct rxrpc_call, @@ -506,7 +506,10 @@ int rxrpc_io_thread(void *data) set_current_state(TASK_INTERRUPTIBLE); if (!skb_queue_empty(&local->rx_queue) || !list_empty(&local->call_attend_q) || - !list_empty(&local->conn_attend_q)) { + !list_empty(&local->conn_attend_q) || + !list_empty(&local->new_client_calls) || + test_bit(RXRPC_CLIENT_CONN_REAP_TIMER, + &local->client_conn_flags)) { __set_current_state(TASK_RUNNING); continue; } diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c index 6a463e90cd8b..ca3e89684d8b 100644 --- a/net/rxrpc/local_object.c +++ b/net/rxrpc/local_object.c @@ -118,7 +118,6 @@ static struct rxrpc_local *rxrpc_alloc_local(struct net *net, local->client_bundles = RB_ROOT; spin_lock_init(&local->client_bundles_lock); local->kill_all_client_conns = false; - spin_lock_init(&local->client_conn_cache_lock); INIT_LIST_HEAD(&local->idle_client_conns); timer_setup(&local->client_conn_reap_timer, rxrpc_client_conn_reap_timeout, 0); @@ -134,7 +133,8 @@ static struct rxrpc_local *rxrpc_alloc_local(struct net *net, if (tmp == 0) tmp = 1; idr_set_cursor(&local->conn_ids, tmp); - spin_lock_init(&local->conn_lock); + INIT_LIST_HEAD(&local->new_client_calls); + spin_lock_init(&local->client_call_lock); trace_rxrpc_local(local->debug_id, rxrpc_local_new, 1, 1); } @@ -437,7 +437,7 @@ void rxrpc_destroy_local(struct rxrpc_local *local) rxrpc_purge_queue(&local->rx_delay_queue); #endif rxrpc_purge_queue(&local->rx_queue); - rxrpc_destroy_client_conn_ids(local); + rxrpc_purge_client_connections(local); } /* diff --git a/net/rxrpc/proc.c b/net/rxrpc/proc.c index e5cbe88daaa5..fdb0e11a583e 100644 --- a/net/rxrpc/proc.c +++ b/net/rxrpc/proc.c @@ -12,6 +12,7 @@ static const char *const rxrpc_conn_states[RXRPC_CONN__NR_STATES] = { [RXRPC_CONN_UNUSED] = "Unused ", + [RXRPC_CONN_CLIENT_UNSECURED] = "ClUnsec ", [RXRPC_CONN_CLIENT] = "Client ", [RXRPC_CONN_SERVICE_PREALLOC] = "SvPrealc", [RXRPC_CONN_SERVICE_UNSECURED] = "SvUnsec ", diff --git a/net/rxrpc/rxkad.c b/net/rxrpc/rxkad.c index 9abf6bb56b65..82fee7c47481 100644 --- a/net/rxrpc/rxkad.c +++ b/net/rxrpc/rxkad.c @@ -1122,36 +1122,31 @@ static int rxkad_verify_response(struct rxrpc_connection *conn, goto protocol_error_free; } - spin_lock(&conn->bundle->channel_lock); for (i = 0; i < RXRPC_MAXCALLS; i++) { - struct rxrpc_call *call; u32 call_id = ntohl(response->encrypted.call_id[i]); + u32 counter = READ_ONCE(conn->channels[i].call_counter); if (call_id > INT_MAX) { rxrpc_abort_conn(conn, skb, RXKADSEALEDINCON, -EPROTO, rxkad_abort_resp_bad_callid); - goto protocol_error_unlock; + goto protocol_error_free; } - if (call_id < conn->channels[i].call_counter) { + if (call_id < counter) { rxrpc_abort_conn(conn, skb, RXKADSEALEDINCON, -EPROTO, rxkad_abort_resp_call_ctr); - goto protocol_error_unlock; + goto protocol_error_free; } - if (call_id > conn->channels[i].call_counter) { - call = rcu_dereference_protected( - conn->channels[i].call, - lockdep_is_held(&conn->bundle->channel_lock)); - if (call && !__rxrpc_call_is_complete(call)) { + if (call_id > counter) { + if (conn->channels[i].call) { rxrpc_abort_conn(conn, skb, RXKADSEALEDINCON, -EPROTO, rxkad_abort_resp_call_state); - goto protocol_error_unlock; + goto protocol_error_free; } conn->channels[i].call_counter = call_id; } } - spin_unlock(&conn->bundle->channel_lock); if (ntohl(response->encrypted.inc_nonce) != conn->rxkad.nonce + 1) { rxrpc_abort_conn(conn, skb, RXKADOUTOFSEQUENCE, -EPROTO, @@ -1179,8 +1174,6 @@ 
static int rxkad_verify_response(struct rxrpc_connection *conn, _leave(" = 0"); return 0; -protocol_error_unlock: - spin_unlock(&conn->bundle->channel_lock); protocol_error_free: kfree(ticket); protocol_error: diff --git a/net/rxrpc/security.c b/net/rxrpc/security.c index 78af14694618..fcb0846406c0 100644 --- a/net/rxrpc/security.c +++ b/net/rxrpc/security.c @@ -97,38 +97,32 @@ int rxrpc_init_client_call_security(struct rxrpc_call *call) */ int rxrpc_init_client_conn_security(struct rxrpc_connection *conn) { - const struct rxrpc_security *sec; struct rxrpc_key_token *token; struct key *key = conn->key; - int ret; + int ret = 0; _enter("{%d},{%x}", conn->debug_id, key_serial(key)); - if (!key) - return 0; - - ret = key_validate(key); - if (ret < 0) - return ret; - for (token = key->payload.data[0]; token; token = token->next) { - sec = rxrpc_security_lookup(token->security_index); - if (sec) + if (token->security_index == conn->security->security_index) goto found; } return -EKEYREJECTED; found: - conn->security = sec; - - ret = conn->security->init_connection_security(conn, token); - if (ret < 0) { - conn->security = &rxrpc_no_security; - return ret; + mutex_lock(&conn->security_lock); + if (conn->state == RXRPC_CONN_CLIENT_UNSECURED) { + ret = conn->security->init_connection_security(conn, token); + if (ret == 0) { + spin_lock(&conn->state_lock); + if (conn->state == RXRPC_CONN_CLIENT_UNSECURED) + smp_store_release(&conn->state, + RXRPC_CONN_CLIENT); + spin_unlock(&conn->state_lock); + } } - - _leave(" = 0"); - return 0; + mutex_unlock(&conn->security_lock); + return ret; } /* diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c index b0182de63226..7c24e681b9bf 100644 --- a/net/rxrpc/sendmsg.c +++ b/net/rxrpc/sendmsg.c @@ -37,6 +37,60 @@ bool rxrpc_propose_abort(struct rxrpc_call *call, s32 abort_code, int error, return false; } +/* + * Wait for a call to become connected. Interruption here doesn't cause the + * call to be aborted. + */ +static int rxrpc_wait_to_be_connected(struct rxrpc_call *call, long *timeo) +{ + DECLARE_WAITQUEUE(myself, current); + int ret = 0; + + _enter("%d", call->debug_id); + + if (rxrpc_call_state(call) != RXRPC_CALL_CLIENT_AWAIT_CONN) + return call->error; + + add_wait_queue_exclusive(&call->waitq, &myself); + + for (;;) { + ret = call->error; + if (ret < 0) + break; + + switch (call->interruptibility) { + case RXRPC_INTERRUPTIBLE: + case RXRPC_PREINTERRUPTIBLE: + set_current_state(TASK_INTERRUPTIBLE); + break; + case RXRPC_UNINTERRUPTIBLE: + default: + set_current_state(TASK_UNINTERRUPTIBLE); + break; + } + if (rxrpc_call_state(call) != RXRPC_CALL_CLIENT_AWAIT_CONN) { + ret = call->error; + break; + } + if ((call->interruptibility == RXRPC_INTERRUPTIBLE || + call->interruptibility == RXRPC_PREINTERRUPTIBLE) && + signal_pending(current)) { + ret = sock_intr_errno(*timeo); + break; + } + *timeo = schedule_timeout(*timeo); + } + + remove_wait_queue(&call->waitq, &myself); + __set_current_state(TASK_RUNNING); + + if (ret == 0 && rxrpc_call_is_complete(call)) + ret = call->error; + + _leave(" = %d", ret); + return ret; +} + /* * Return true if there's sufficient Tx queue space. 
*/ @@ -238,6 +292,16 @@ static int rxrpc_send_data(struct rxrpc_sock *rx, timeo = sock_sndtimeo(sk, msg->msg_flags & MSG_DONTWAIT); + ret = rxrpc_wait_to_be_connected(call, &timeo); + if (ret < 0) + return ret; + + if (call->conn->state == RXRPC_CONN_CLIENT_UNSECURED) { + ret = rxrpc_init_client_conn_security(call->conn); + if (ret < 0) + return ret; + } + /* this should be in poll */ sk_clear_bit(SOCKWQ_ASYNC_NOSPACE, sk); From patchwork Tue Dec 6 16:03:05 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 30424 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp2910027wrr; Tue, 6 Dec 2022 08:10:47 -0800 (PST) X-Google-Smtp-Source: AA0mqf4Xdn4+ejT/mvqOzHiIK+79lPNLoDhZHTjnDmeH1Ar+r/DQDAiN38qZjUyBdZF5m9OsLmRF X-Received: by 2002:a17:903:4d4:b0:189:b9db:deaf with SMTP id jm20-20020a17090304d400b00189b9dbdeafmr22119740plb.133.1670343046861; Tue, 06 Dec 2022 08:10:46 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1670343046; cv=none; d=google.com; s=arc-20160816; b=OFwqFuXZNXd2xKpP+rkFte9uHwxG4vghdr8Uvvx41stYrSN6hQi4q8ps/ssAhPWUTE aBjOBHy/KiV1iwKwrozpQx9kIzSw8Ay4cP50er2W8pq2sCisDjXxDW1So+8hhT7p1R5j HqxxZ93FsDcVqxyc5Gw21bbCudAUMav0agVZ8m8yMsOPwEWZSRprE5ls9JK0RIxMmAwK WlRMZEbtZHdMHCtGEnDWv1T77XKEW/4JnqlG7ZFRGB9Yso3kmL3pRJsFvX3tI+03k96d IFI6S7ByEZbv8dgXD7TemWjyJ8UueC2mDmvi9cdp+jLymq0sGseJMGix0UhtAM8PDYHA 9dAw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :user-agent:references:in-reply-to:message-id:date:cc:to:from :subject:organization:dkim-signature; bh=if43UGApWWJdqTN/53+zoOb2JfWQ93VyHHfMWcbQemE=; b=OmzL1bk/nqUyI+AbOdhEgRTMYYxYkf7xw7280aAJuq3gfzPyJoy+Ymkh0SnJpGrdDl /BA8ye41StequRec0tqD/T44NslzZjcDL++37pH1QNXq4CDAsMa/UlhY2TjoCh9UBRrD r5nIvhIH7dui9qsIuCx2c7U4MyzFe7r8aLAAw4ijELWgK/Y8A4MjqoSyZ/3Grx1X9fyX KNQf+DxmmZXVf0REGYfe0Y7K71lr83P/KJkBsh3+o+BZGO49kcm92UDrp0wjazrMKftP hsIqzL1t7+ljEmpQS1wMXAyHY+LXas6jA9p9pwuysBk+0h7ZmTEuxZZhZ1OCBPZ4pxna 9sGA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=TK7ibaAC; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id h8-20020a170902704800b001865f8637a8si16478516plt.334.2022.12.06.08.10.27; Tue, 06 Dec 2022 08:10:46 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=TK7ibaAC; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235500AbiLFQJT (ORCPT + 99 others); Tue, 6 Dec 2022 11:09:19 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54582 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231669AbiLFQI0 (ORCPT ); Tue, 6 Dec 2022 11:08:26 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 338E336C4B for ; Tue, 6 Dec 2022 08:03:42 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1670342592; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=if43UGApWWJdqTN/53+zoOb2JfWQ93VyHHfMWcbQemE=; b=TK7ibaACa2bpQnMYgkHthWRhHjRQ4ozzksMh1u+vlwoouxSZ/T8UFik4oMpaQmbNrCKfER PWteLDyiVRLBJpJ5au2SzdQq4hqdsfaWhzSWgu0K+hfY15yz89rVRv2t1E83ZVUHM6XwPo Ua3USm0l5nYvPXFRzY2gVhV/mrAGiMI= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-307-DRMKhfe8PJ2MOlVLvAu9cQ-1; Tue, 06 Dec 2022 11:03:08 -0500 X-MC-Unique: DRMKhfe8PJ2MOlVLvAu9cQ-1 Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com [10.11.54.7]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 42205857FAC; Tue, 6 Dec 2022 16:03:08 +0000 (UTC) Received: from warthog.procyon.org.uk (unknown [10.33.36.17]) by smtp.corp.redhat.com (Postfix) with ESMTP id 9BE651410DD9; Tue, 6 Dec 2022 16:03:07 +0000 (UTC) Organization: Red Hat UK Ltd. Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SI4 1TE, United Kingdom. Registered in England and Wales under Company Registration No. 
3798903 Subject: [PATCH net-next 32/32] rxrpc: Kill service bundle From: David Howells To: netdev@vger.kernel.org Cc: Marc Dionne , linux-afs@lists.infradead.org, dhowells@redhat.com, linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Date: Tue, 06 Dec 2022 16:03:05 +0000 Message-ID: <167034258505.1105287.5403397572562689183.stgit@warthog.procyon.org.uk> In-Reply-To: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> References: <167034231605.1105287.1693064952174322878.stgit@warthog.procyon.org.uk> User-Agent: StGit/1.5 MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.7 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1751481630980799737?= X-GMAIL-MSGID: =?utf-8?q?1751481630980799737?= Now that the bundle->channel_lock has been eliminated, we don't need the dummy service bundle anymore. Its purpose was purely to provide the channel_lock for service connections. Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org --- net/rxrpc/conn_service.c | 7 ------- 1 file changed, 7 deletions(-) diff --git a/net/rxrpc/conn_service.c index f30323de82bd..89ac05a711a4 100644 --- a/net/rxrpc/conn_service.c +++ b/net/rxrpc/conn_service.c @@ -8,11 +8,6 @@ #include #include "ar-internal.h" -static struct rxrpc_bundle rxrpc_service_dummy_bundle = { - .ref = REFCOUNT_INIT(1), - .debug_id = UINT_MAX, -}; - /* * Find a service connection under RCU conditions. * @@ -132,8 +127,6 @@ struct rxrpc_connection *rxrpc_prealloc_service_connection(struct rxrpc_net *rxn */ conn->state = RXRPC_CONN_SERVICE_PREALLOC; refcount_set(&conn->ref, 2); - conn->bundle = rxrpc_get_bundle(&rxrpc_service_dummy_bundle, - rxrpc_bundle_get_service_conn); atomic_inc(&rxnet->nr_conns); write_lock(&rxnet->conn_lock);