Message ID | 20230327083646.18690-2-jgross@suse.com |
---|---|
State | New |
Headers |
From: Juergen Gross <jgross@suse.com>
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>, Wei Liu <wei.liu@kernel.org>, Paul Durrant <paul@xen.org>, "David S. Miller" <davem@davemloft.net>, Eric Dumazet <edumazet@google.com>, Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>, xen-devel@lists.xenproject.org, stable@vger.kernel.org
Subject: [PATCH 1/2] xen/netback: don't do grant copy across page boundary
Date: Mon, 27 Mar 2023 10:36:45 +0200
Message-Id: <20230327083646.18690-2-jgross@suse.com>
In-Reply-To: <20230327083646.18690-1-jgross@suse.com>
References: <20230327083646.18690-1-jgross@suse.com> |
Series | xen/netback: fix issue introduced recently |
Commit Message
Juergen Gross
March 27, 2023, 8:36 a.m. UTC
Fix xenvif_get_requests() not to do grant copy operations across local
page boundaries. This requires doubling the maximum number of copy
operations per queue, as each copy could now be split in two.
Make sure that struct xenvif_tx_cb doesn't grow too large.
Cc: stable@vger.kernel.org
Fixes: ad7f402ae4f4 ("xen/netback: Ensure protocol headers don't fall in the non-linear area")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
drivers/net/xen-netback/common.h | 2 +-
drivers/net/xen-netback/netback.c | 25 +++++++++++++++++++++++--
2 files changed, 24 insertions(+), 3 deletions(-)
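To illustrate the fix described above, here is a minimal standalone sketch of the boundary-split arithmetic (a user-space model with assumed names, not the driver code itself; the actual patch performs this split on gnttab_copy destination offsets, as shown in the diff below):

#include <stdio.h>

#define XEN_PAGE_SIZE 4096u

/* Split one copy of 'amount' bytes landing at destination offset
 * 'dest_offset' so that no piece crosses a local page boundary.
 * Returns the number of pieces (1 or 2) and fills len[].
 */
static int split_copy(unsigned int dest_offset, unsigned int amount,
		      unsigned int len[2])
{
	if (dest_offset + amount <= XEN_PAGE_SIZE) {
		len[0] = amount;		/* fits in the current page */
		return 1;
	}
	len[0] = XEN_PAGE_SIZE - dest_offset;	/* up to the boundary */
	len[1] = amount - len[0];		/* remainder, next page */
	return 2;
}

int main(void)
{
	unsigned int len[2];
	/* e.g. 1000 bytes landing at offset 3800 of a 4096-byte page */
	int n = split_copy(3800, 1000, len);

	printf("%d piece(s): %u", n, len[0]);	/* prints: 2 piece(s): 296 704 */
	if (n == 2)
		printf(" %u", len[1]);
	printf("\n");
	return 0;
}

Because a single request slot is itself limited to XEN_PAGE_SIZE bytes, at most one split per copy is needed, which is why doubling tx_copy_ops[] suffices.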
Comments
On 27/03/2023 09:36, Juergen Gross wrote:
> Fix xenvif_get_requests() not to do grant copy operations across local
> page boundaries. [...]

Reviewed-by: Paul Durrant <paul@xen.org>
On 27.03.2023 10:36, Juergen Gross wrote:
> [...]
> @@ -413,6 +418,13 @@ static void xenvif_get_requests(struct xenvif_queue *queue,
>  		cop->dest.u.gmfn = virt_to_gfn(skb->data + skb_headlen(skb)
>  				       - data_len);
>
> +		/* Don't cross local page boundary! */
> +		if (cop->dest.offset + amount > XEN_PAGE_SIZE) {
> +			amount = XEN_PAGE_SIZE - cop->dest.offset;
> +			XENVIF_TX_CB(skb)->split_mask |= 1U << copy_count(skb);

Maybe worthwhile to add a BUILD_BUG_ON() somewhere to make sure this
shift won't grow too large a shift count. The number of slots accepted
could conceivably be grown past XEN_NETBK_LEGACY_SLOTS_MAX (i.e.
XEN_NETIF_NR_SLOTS_MIN) at some point.

> [...]
> @@ -539,6 +553,13 @@ static int xenvif_tx_check_gop(struct xenvif_queue *queue,
>  		pending_idx = copy_pending_idx(skb, i);
>
>  		newerr = (*gopp_copy)->status;
> +
> +		/* Split copies need to be handled together. */
> +		if (XENVIF_TX_CB(skb)->split_mask & (1U << i)) {
> +			(*gopp_copy)++;
> +			if (!newerr)
> +				newerr = (*gopp_copy)->status;
> +		}

It isn't guaranteed that a slot may be split only once, is it? Assuming a
near-64k packet with all tiny non-primary slots, that'll cause those tiny
slots to all be mapped, but

	if (ret >= XEN_NETBK_LEGACY_SLOTS_MAX - 1 && data_len < txreq.size)
		data_len = txreq.size;

will, afaict, cause a lot of copying for the primary slot. Therefore I
think you need a loop here, not just an if(). Plus tx_copy_ops[]'es
dimension also looks to need further growing to accommodate this. Or
maybe not - at least the extreme example given would still be fine; more
generally packets being limited to below 64k means 2*16 slots would
suffice at one end of the scale, while 2*MAX_PENDING_REQS would at the
other end (all tiny, including the primary slot). What I haven't fully
convinced myself of is whether there might be cases in the middle which
are yet worse.

As I've been struggling with the code fragment quoted above already in
the patch originally introducing it, I'd like to see that relaxed. Can't
we avoid excessive copying by suitably growing tx_map_ops[] and then
deleting that bumping of data_len? Then there also wouldn't be the risk
of multiple splits per copy anymore.

Alternatively to all of the above: Am I overlooking a check somewhere
which would also constrain the primary slot (or more precisely its
residual) to within a single page (along the lines of the check for
non-primary slots in xenvif_count_requests())?

Jan
On 27.03.23 11:49, Jan Beulich wrote:
> On 27.03.2023 10:36, Juergen Gross wrote:
>> +		/* Don't cross local page boundary! */
>> +		if (cop->dest.offset + amount > XEN_PAGE_SIZE) {
>> +			amount = XEN_PAGE_SIZE - cop->dest.offset;
>> +			XENVIF_TX_CB(skb)->split_mask |= 1U << copy_count(skb);
>
> Maybe worthwhile to add a BUILD_BUG_ON() somewhere to make sure this
> shift won't grow too large a shift count. The number of slots accepted
> could conceivably be grown past XEN_NETBK_LEGACY_SLOTS_MAX (i.e.
> XEN_NETIF_NR_SLOTS_MIN) at some point.

This is basically impossible due to the size restriction of struct
xenvif_tx_cb.

> [...]
> It isn't guaranteed that a slot may be split only once, is it?

I think it is guaranteed.

No slot can cover more than XEN_PAGE_SIZE bytes due to the grants being
restricted to that size. There is no way such a data packet could cross
2 page boundaries.

In the end the problem isn't the copies for the linear area not crossing
multiple page boundaries, but the copies for a single request slot not
doing so. And this can't happen IMO.

> Assuming a near-64k packet with all tiny non-primary slots, [...] What
> I haven't fully convinced myself of is whether there might be cases in
> the middle which are yet worse.

See above reasoning. I think it is okay, but maybe I'm missing something.

Juergen
On 27.03.2023 12:07, Juergen Gross wrote:
> On 27.03.23 11:49, Jan Beulich wrote:
>> Maybe worthwhile to add a BUILD_BUG_ON() somewhere to make sure this
>> shift won't grow too large a shift count. [...]
>
> This is basically impossible due to the size restriction of struct
> xenvif_tx_cb.

If its size became a problem, it might simply take a level of indirection
to overcome the limitation.

> I think it is guaranteed.
>
> No slot can cover more than XEN_PAGE_SIZE bytes due to the grants being
> restricted to that size. There is no way such a data packet could cross
> 2 page boundaries.
>
> In the end the problem isn't the copies for the linear area not crossing
> multiple page boundaries, but the copies for a single request slot not
> doing so. And this can't happen IMO.

You're thinking of only well-formed requests. What about said request
providing a large size with only tiny fragments? xenvif_get_requests()
will happily process such, creating bogus grant-copy ops. But them failing
once submitted to Xen will be only after damage may already have occurred
(from bogus updates of internal state; the logic altogether is too
involved for me to be convinced that nothing bad can happen).

Interestingly (as I realize now) the shifts you add are not at risk of
turning UB in this case, as the shift count won't go beyond 16.

> See above reasoning. I think it is okay, but maybe I'm missing something.

Well, the main thing I'm missing is a "primary request fits in a page"
check, even more so with the new copying logic that the commit referenced
by Fixes: introduced into xenvif_get_requests().

Jan
On 27.03.23 17:38, Jan Beulich wrote:
> On 27.03.2023 12:07, Juergen Gross wrote:
>> This is basically impossible due to the size restriction of struct
>> xenvif_tx_cb.
>
> If its size became a problem, it might simply take a level of indirection
> to overcome the limitation.

Maybe. OTOH this would require some rework, which should take such
problems into consideration.

In the end I'd be fine to add such a BUILD_BUG_ON(), as the code is
complicated enough already.

> You're thinking of only well-formed requests. What about said request
> providing a large size with only tiny fragments? [...]

There are sanity checks after each relevant RING_COPY_REQUEST() call, which
will bail out if "(txp->offset + txp->size) > XEN_PAGE_SIZE" (the first one
is after the call of xenvif_count_requests(), as this call will decrease the
size of the request, the other check is in xenvif_count_requests()).

> Well, the main thing I'm missing is a "primary request fits in a page"
> check, even more so with the new copying logic that the commit referenced
> by Fixes: introduced into xenvif_get_requests().

When xenvif_get_requests() gets called, all requests are sanity checked
already (note that xenvif_get_requests() is working on the local copies of
the requests).

Juergen
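For reference, a compile-time guard along the lines discussed could look roughly like this (a sketch only; the exact expression and placement are assumptions, not part of the posted patch — sizeof_field() and BITS_PER_BYTE are the usual kernel helpers):

	/* Sketch: ensure split_mask has a bit for every copy slot that
	 * copy_count(skb) can index, so 1U << copy_count(skb) can never
	 * exceed the mask width if the slot limit is ever raised.
	 */
	BUILD_BUG_ON(XEN_NETBK_LEGACY_SLOTS_MAX + 1 >
		     sizeof_field(struct xenvif_tx_cb, split_mask) * BITS_PER_BYTE);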
On 27.03.2023 18:22, Juergen Gross wrote:
> There are sanity checks after each relevant RING_COPY_REQUEST() call, which
> will bail out if "(txp->offset + txp->size) > XEN_PAGE_SIZE" (the first one
> is after the call of xenvif_count_requests(), as this call will decrease the
> size of the request, the other check is in xenvif_count_requests()).

Oh, indeed - that's the check I've been overlooking. (The messages logged
there could do with also mentioning "Cross page boundary", like the one in
xenvif_count_requests() does.)

Jan
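For context, the check being referred to has roughly this shape after the xenvif_count_requests() call (paraphrased from netback.c; the exact log text is an assumption here):

	if (unlikely((txreq.offset + txreq.size) > XEN_PAGE_SIZE)) {
		/* Request would cross a page boundary - fatal error. */
		netdev_err(queue->vif->dev, "txreq.offset: %u, size: %u\n",
			   txreq.offset, txreq.size);
		xenvif_fatal_tx_err(queue->vif);
		break;
	}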
diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 3dbfc8a6924e..1fcbd83f7ff2 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -166,7 +166,7 @@ struct xenvif_queue { /* Per-queue data for xenvif */
 	struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
 	grant_handle_t grant_tx_handle[MAX_PENDING_REQS];
 
-	struct gnttab_copy tx_copy_ops[MAX_PENDING_REQS];
+	struct gnttab_copy tx_copy_ops[2 * MAX_PENDING_REQS];
 	struct gnttab_map_grant_ref tx_map_ops[MAX_PENDING_REQS];
 	struct gnttab_unmap_grant_ref tx_unmap_ops[MAX_PENDING_REQS];
 	/* passed to gnttab_[un]map_refs with pages under (un)mapping */
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 1b42676ca141..111c179f161b 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -334,6 +334,7 @@ static int xenvif_count_requests(struct xenvif_queue *queue,
 struct xenvif_tx_cb {
 	u16 copy_pending_idx[XEN_NETBK_LEGACY_SLOTS_MAX + 1];
 	u8 copy_count;
+	u32 split_mask;
 };
 
 #define XENVIF_TX_CB(skb) ((struct xenvif_tx_cb *)(skb)->cb)
@@ -361,6 +362,8 @@ static inline struct sk_buff *xenvif_alloc_skb(unsigned int size)
 	struct sk_buff *skb =
 		alloc_skb(size + NET_SKB_PAD + NET_IP_ALIGN,
 			  GFP_ATOMIC | __GFP_NOWARN);
+
+	BUILD_BUG_ON(sizeof(*XENVIF_TX_CB(skb)) > sizeof(skb->cb));
 	if (unlikely(skb == NULL))
 		return NULL;
 
@@ -396,11 +399,13 @@ static void xenvif_get_requests(struct xenvif_queue *queue,
 	nr_slots = shinfo->nr_frags + 1;
 
 	copy_count(skb) = 0;
+	XENVIF_TX_CB(skb)->split_mask = 0;
 
 	/* Create copy ops for exactly data_len bytes into the skb head. */
 	__skb_put(skb, data_len);
 	while (data_len > 0) {
 		int amount = data_len > txp->size ? txp->size : data_len;
+		bool split = false;
 
 		cop->source.u.ref = txp->gref;
 		cop->source.domid = queue->vif->domid;
@@ -413,6 +418,13 @@ static void xenvif_get_requests(struct xenvif_queue *queue,
 		cop->dest.u.gmfn = virt_to_gfn(skb->data + skb_headlen(skb)
 				       - data_len);
 
+		/* Don't cross local page boundary! */
+		if (cop->dest.offset + amount > XEN_PAGE_SIZE) {
+			amount = XEN_PAGE_SIZE - cop->dest.offset;
+			XENVIF_TX_CB(skb)->split_mask |= 1U << copy_count(skb);
+			split = true;
+		}
+
 		cop->len = amount;
 		cop->flags = GNTCOPY_source_gref;
 
@@ -420,7 +432,8 @@ static void xenvif_get_requests(struct xenvif_queue *queue,
 		pending_idx = queue->pending_ring[index];
 		callback_param(queue, pending_idx).ctx = NULL;
 		copy_pending_idx(skb, copy_count(skb)) = pending_idx;
-		copy_count(skb)++;
+		if (!split)
+			copy_count(skb)++;
 
 		cop++;
 		data_len -= amount;
@@ -441,7 +454,8 @@ static void xenvif_get_requests(struct xenvif_queue *queue,
 			nr_slots--;
 		} else {
 			/* The copy op partially covered the tx_request.
-			 * The remainder will be mapped.
+			 * The remainder will be mapped or copied in the next
+			 * iteration.
 			 */
 			txp->offset += amount;
 			txp->size -= amount;
@@ -539,6 +553,13 @@ static int xenvif_tx_check_gop(struct xenvif_queue *queue,
 		pending_idx = copy_pending_idx(skb, i);
 
 		newerr = (*gopp_copy)->status;
+
+		/* Split copies need to be handled together. */
+		if (XENVIF_TX_CB(skb)->split_mask & (1U << i)) {
+			(*gopp_copy)++;
+			if (!newerr)
+				newerr = (*gopp_copy)->status;
+		}
 		if (likely(!newerr)) {
 			/* The first frag might still have this slot mapped */
 			if (i < copy_count(skb) - 1 || !sharedslot)
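To make the consumer side of split_mask concrete, here is a reduced standalone model of how xenvif_tx_check_gop() pairs the two halves of a split copy (a user-space sketch with assumed names; the real function also handles mapped frags and error unwinding):

#include <stdint.h>
#include <stdio.h>

/* Reduced model: walk an array of copy-op statuses; entries whose bit
 * is set in split_mask were split in two, so consume two statuses and
 * report an error if either half failed.
 */
static void check_copies(const int16_t *status, int copy_count,
			 uint32_t split_mask)
{
	int op = 0;

	for (int i = 0; i < copy_count; i++) {
		int err = status[op];

		if (split_mask & (1U << i)) {
			op++;			/* second half of the split */
			if (!err)
				err = status[op];
		}
		op++;
		printf("slot %d: %s\n", i, err ? "error" : "ok");
	}
}

int main(void)
{
	/* slot 0 was split into ops 0+1 (op 1 failed); slot 1 is op 2 */
	const int16_t status[] = { 0, -1, 0 };

	check_copies(status, 2, 1U << 0);	/* slot 0: error, slot 1: ok */
	return 0;
}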