From patchwork Fri Dec 15 17:07:43 2023
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 179443
From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Miller" , Boqun Feng , Daniel Borkmann , Eric Dumazet , Frederic Weisbecker , Ingo Molnar , Jakub Kicinski , Paolo Abeni , Peter Zijlstra , Thomas Gleixner , Waiman Long , Will Deacon , Sebastian Andrzej Siewior , Alexei Starovoitov , Andrii Nakryiko , Hao Luo , Jesper Dangaard Brouer , Jiri Olsa , John Fastabend , KP Singh , Martin KaFai Lau , Song Liu , Stanislav Fomichev , Yonghong Song , bpf@vger.kernel.org Subject: [PATCH net-next 24/24] net: bpf: Add lockdep assert for the redirect process. Date: Fri, 15 Dec 2023 18:07:43 +0100 Message-ID: <20231215171020.687342-25-bigeasy@linutronix.de> In-Reply-To: <20231215171020.687342-1-bigeasy@linutronix.de> References: <20231215171020.687342-1-bigeasy@linutronix.de> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1785369260530728353 X-GMAIL-MSGID: 1785369260530728353 The users of bpf_redirect_info should lock the access by acquiring the nested BH-lock bpf_run_lock.redirect_lock. This lock should be acquired before the first usage (bpf_prog_run_xdp()) and dropped after the last user in the context (xdp_do_redirect()). Current user in tree have been audited and updated. Add lockdep annonation to ensure new user acquire the lock. Cc: Alexei Starovoitov Cc: Andrii Nakryiko Cc: Hao Luo Cc: Jesper Dangaard Brouer Cc: Jiri Olsa Cc: John Fastabend Cc: KP Singh Cc: Martin KaFai Lau Cc: Song Liu Cc: Stanislav Fomichev Cc: Yonghong Song Cc: bpf@vger.kernel.org Signed-off-by: Sebastian Andrzej Siewior --- include/net/xdp.h | 1 + net/core/filter.c | 11 +++++++++++ 2 files changed, 12 insertions(+) diff --git a/include/net/xdp.h b/include/net/xdp.h index 349c36fb5fd8f..cdeab175abf18 100644 --- a/include/net/xdp.h +++ b/include/net/xdp.h @@ -493,6 +493,7 @@ static inline void xdp_clear_features_flag(struct net_device *dev) static __always_inline u32 bpf_prog_run_xdp(const struct bpf_prog *prog, struct xdp_buff *xdp) { + lockdep_assert_held(this_cpu_ptr(&bpf_run_lock.redirect_lock)); /* Driver XDP hooks are invoked within a single NAPI poll cycle and thus * under local_bh_disable(), which provides the needed RCU protection * for accessing map entries. 
diff --git a/net/core/filter.c b/net/core/filter.c
index 72a7812f933a1..a2f97503ed578 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -2495,6 +2495,7 @@ int skb_do_redirect(struct sk_buff *skb)
 	struct net_device *dev;
 	u32 flags = ri->flags;

+	lockdep_assert_held(this_cpu_ptr(&bpf_run_lock.redirect_lock));
 	dev = dev_get_by_index_rcu(net, ri->tgt_index);
 	ri->tgt_index = 0;
 	ri->flags = 0;
@@ -2525,6 +2526,8 @@ BPF_CALL_2(bpf_redirect, u32, ifindex, u64, flags)
 {
 	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);

+	lockdep_assert_held(this_cpu_ptr(&bpf_run_lock.redirect_lock));
+
 	if (unlikely(flags & (~(BPF_F_INGRESS) | BPF_F_REDIRECT_INTERNAL)))
 		return TC_ACT_SHOT;

@@ -2546,6 +2549,8 @@ BPF_CALL_2(bpf_redirect_peer, u32, ifindex, u64, flags)
 {
 	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);

+	lockdep_assert_held(this_cpu_ptr(&bpf_run_lock.redirect_lock));
+
 	if (unlikely(flags))
 		return TC_ACT_SHOT;

@@ -2568,6 +2573,8 @@ BPF_CALL_4(bpf_redirect_neigh, u32, ifindex, struct bpf_redir_neigh *, params,
 {
 	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);

+	lockdep_assert_held(this_cpu_ptr(&bpf_run_lock.redirect_lock));
+
 	if (unlikely((plen && plen < sizeof(*params)) || flags))
 		return TC_ACT_SHOT;

@@ -4287,6 +4294,8 @@ u32 xdp_master_redirect(struct xdp_buff *xdp)
 	struct net_device *master, *slave;
 	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);

+	lockdep_assert_held(this_cpu_ptr(&bpf_run_lock.redirect_lock));
+
 	master = netdev_master_upper_dev_get_rcu(xdp->rxq->dev);
 	slave = master->netdev_ops->ndo_xdp_get_xmit_slave(master, xdp);
 	if (slave && slave != xdp->rxq->dev) {
@@ -4394,6 +4403,7 @@ int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp,
 	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
 	enum bpf_map_type map_type = ri->map_type;

+	lockdep_assert_held(this_cpu_ptr(&bpf_run_lock.redirect_lock));
 	if (map_type == BPF_MAP_TYPE_XSKMAP)
 		return __xdp_do_redirect_xsk(ri, dev, xdp, xdp_prog);
@@ -4408,6 +4418,7 @@ int xdp_do_redirect_frame(struct net_device *dev, struct xdp_buff *xdp,
 	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
 	enum bpf_map_type map_type = ri->map_type;

+	lockdep_assert_held(this_cpu_ptr(&bpf_run_lock.redirect_lock));
 	if (map_type == BPF_MAP_TYPE_XSKMAP)
 		return __xdp_do_redirect_xsk(ri, dev, xdp, xdp_prog);
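
For reference, here is a minimal sketch of the caller-side pattern these
assertions expect. It assumes the per-CPU bpf_run_lock and the
local_lock_nested_bh()/local_unlock_nested_bh() helpers introduced
earlier in this series; the driver structure and function names
(mydrv_*) are purely illustrative and not part of any in-tree driver.

/*
 * Illustrative only: mydrv_rx_ring and mydrv_run_xdp() are hypothetical.
 * Assumes the per-CPU bpf_run_lock and the nested-BH local_lock helpers
 * added by earlier patches in this series.
 */
#include <linux/filter.h>
#include <net/xdp.h>

struct mydrv_rx_ring {
	struct net_device *netdev;
	struct bpf_prog *xdp_prog;
};

static u32 mydrv_run_xdp(struct mydrv_rx_ring *ring, struct xdp_buff *xdp)
{
	struct bpf_prog *prog = READ_ONCE(ring->xdp_prog);
	u32 act;

	/* First user of bpf_redirect_info in this NAPI context: acquire
	 * the nested BH-lock before running the program ...
	 */
	local_lock_nested_bh(&bpf_run_lock.redirect_lock);
	act = bpf_prog_run_xdp(prog, xdp);
	if (act == XDP_REDIRECT &&
	    xdp_do_redirect(ring->netdev, xdp, prog) < 0)
		act = XDP_DROP;
	/* ... and drop it after the last user, xdp_do_redirect(). */
	local_unlock_nested_bh(&bpf_run_lock.redirect_lock);

	return act;
}

With the assertions in place, lockdep will complain if a new caller
touches bpf_redirect_info without holding redirect_lock.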