From patchwork Wed Dec 20 21:45:00 2023
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 181787
Date: Wed, 20 Dec 2023 13:45:00 -0800
In-Reply-To: <20231220214505.2303297-1-almasrymina@google.com>
References: <20231220214505.2303297-1-almasrymina@google.com>
Message-ID: <20231220214505.2303297-2-almasrymina@google.com>
Subject: [PATCH net-next v3 1/3] vsock/virtio: use skb_frag_*() helpers
From: Mina Almasry
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, kvm@vger.kernel.org,
    virtualization@lists.linux.dev
Cc: Mina Almasry, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, Stefan Hajnoczi, Stefano Garzarella, David Howells,
    Jason Gunthorpe, Christian König, Shakeel Butt, Yunsheng Lin,
    Willem de Bruijn

Minor fix for virtio: code wanting to access the fields inside an skb frag
should use the skb_frag_*() helpers instead of accessing the fields directly.
This allows for extensions where the underlying memory is not a page.

Signed-off-by: Mina Almasry
Reviewed-by: Shakeel Butt
Acked-by: Stefano Garzarella

---

v2:

- Also fix skb_frag_off() + skb_frag_size() (David)
- Did not apply the reviewed-by from Stefano since the patch changed
  quite a bit.

---
 net/vmw_vsock/virtio_transport.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index f495b9e5186b..1748268e0694 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -153,10 +153,10 @@ virtio_transport_send_pkt_work(struct work_struct *work)
 				 * 'virt_to_phys()' later to fill the buffer descriptor.
 				 * We don't touch memory at "virtual" address of this page.
 				 */
-				va = page_to_virt(skb_frag->bv_page);
+				va = page_to_virt(skb_frag_page(skb_frag));
 				sg_init_one(sgs[out_sg],
-					    va + skb_frag->bv_offset,
-					    skb_frag->bv_len);
+					    va + skb_frag_off(skb_frag),
+					    skb_frag_size(skb_frag));
 				out_sg++;
 			}
 		}
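
For reference, a minimal sketch (not part of the patch) of what frag access
through the helpers looks like from a caller's side; the function name is
hypothetical:

/* Not part of the patch: walks an skb's fragments through the accessor
 * helpers instead of reading bv_page / bv_offset / bv_len directly.
 */
#include <linux/skbuff.h>
#include <linux/printk.h>

static unsigned int demo_sum_frag_sizes(struct sk_buff *skb)
{
	unsigned int total = 0;
	int i;

	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
		skb_frag_t *frag = &skb_shinfo(skb)->frags[i];

		/* Page, offset and length all come from helpers, so this
		 * keeps working if the backing memory stops being a plain
		 * struct page.
		 */
		pr_debug("frag %d: page %p off %u len %u\n", i,
			 skb_frag_page(frag), skb_frag_off(frag),
			 skb_frag_size(frag));
		total += skb_frag_size(frag);
	}

	return total;
}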
From patchwork Wed Dec 20 21:45:01 2023
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 181788
Date: Wed, 20 Dec 2023 13:45:01 -0800
In-Reply-To: <20231220214505.2303297-1-almasrymina@google.com>
References: <20231220214505.2303297-1-almasrymina@google.com>
Message-ID: <20231220214505.2303297-3-almasrymina@google.com>
Subject: [PATCH net-next v3 2/3] net: introduce abstraction for network memory
From: Mina Almasry
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, kvm@vger.kernel.org,
    virtualization@lists.linux.dev
Cc: Mina Almasry, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, Stefan Hajnoczi, Stefano Garzarella, David Howells,
    Jason Gunthorpe, Christian König, Shakeel Butt, Yunsheng Lin,
    Willem de Bruijn

Add the netmem_ref type, an abstraction for network memory.

To add support for new memory types to the net stack, we must first abstract
the current memory type. Currently parts of the net stack use struct page
directly:

- page_pool
- drivers
- skb_frag_t

Originally the plan was to reuse struct page* for the new memory types, and
to set the LSB on the page* to indicate it's not really a page. However, for
compiler type checking we need to introduce a new type.

netmem_ref is introduced to abstract the underlying memory type. Currently
it's a no-op abstraction that is always a struct page underneath. In parallel
there is an ongoing effort to add support for devmem to the net stack:

https://lore.kernel.org/netdev/20231208005250.2910004-1-almasrymina@google.com/

Signed-off-by: Mina Almasry

---

v3:

- Modify struct netmem from a union of struct page + new types to an opaque
  netmem_ref type. I went with:

  +typedef void *__bitwise netmem_ref;

  rather than what Jakub recommended:

  +typedef unsigned long __bitwise netmem_ref;

  because with the latter the compiler issues warnings when NULL is cast to
  netmem_ref. I hope that's ok.

- Add some function docs.

v2:

- Use container_of instead of a type cast (David).

---
 include/net/netmem.h | 41 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 41 insertions(+)
 create mode 100644 include/net/netmem.h

diff --git a/include/net/netmem.h b/include/net/netmem.h
new file mode 100644
index 000000000000..edd977326203
--- /dev/null
+++ b/include/net/netmem.h
@@ -0,0 +1,41 @@
+/* SPDX-License-Identifier: GPL-2.0
+ *
+ *	Network memory
+ *
+ *	Author:	Mina Almasry
+ */
+
+#ifndef _NET_NETMEM_H
+#define _NET_NETMEM_H
+
+/**
+ * typedef netmem_ref - a nonexistent type marking a reference to generic
+ * network memory.
+ *
+ * A netmem_ref currently is always a reference to a struct page. This
+ * abstraction is introduced so support for new memory types can be added.
+ *
+ * Use the supplied helpers to obtain the underlying memory pointer and fields.
+ */
+typedef void *__bitwise netmem_ref;
+
+/* This conversion fails (returns NULL) if the netmem_ref is not struct page
+ * backed.
+ *
+ * Currently struct page is the only possible netmem, and this helper never
+ * fails.
+ */
+static inline struct page *netmem_to_page(netmem_ref netmem)
+{
+	return (struct page *)netmem;
+}
+
+/* Converting from page to netmem is always safe, because a page can always be
+ * a netmem.
+ */
+static inline netmem_ref page_to_netmem(struct page *page)
+{
+	return (netmem_ref)page;
+}
+
+#endif /* _NET_NETMEM_H */
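
For reference, a short usage sketch (not part of the patch) of the two
conversion helpers; the function name is hypothetical, and alloc_page() is
only used here to have a page to wrap:

/* Not part of the patch: round-trip through the netmem conversion helpers. */
#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <net/netmem.h>

static int demo_netmem_roundtrip(void)
{
	struct page *page = alloc_page(GFP_KERNEL);
	netmem_ref netmem;

	if (!page)
		return -ENOMEM;

	/* Wrapping a page into a netmem_ref is always safe... */
	netmem = page_to_netmem(page);

	/* ...and today every netmem_ref unwraps back to the same page.
	 * Once non-page backings exist, callers must handle a NULL return
	 * from netmem_to_page().
	 */
	WARN_ON_ONCE(netmem_to_page(netmem) != page);

	put_page(page);
	return 0;
}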
From patchwork Wed Dec 20 21:45:02 2023
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 181798
Date: Wed, 20 Dec 2023 13:45:02 -0800
In-Reply-To: <20231220214505.2303297-1-almasrymina@google.com>
References: <20231220214505.2303297-1-almasrymina@google.com>
Message-ID: <20231220214505.2303297-4-almasrymina@google.com>
Subject: [PATCH net-next v3 3/3] net: add netmem_ref to skb_frag_t
From: Mina Almasry
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, kvm@vger.kernel.org,
    virtualization@lists.linux.dev
Cc: Mina Almasry, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, Stefan Hajnoczi, Stefano Garzarella, David Howells,
    Jason Gunthorpe, Christian König, Shakeel Butt, Yunsheng Lin,
    Willem de Bruijn

Use netmem_ref instead of page in skb_frag_t. Currently netmem_ref is always
a struct page underneath, but the abstraction allows efforts to add support
for skb frags not backed by pages.

There is unfortunately one instance, in kcm, where the skb_frag_t is assumed
to be a bio_vec. For this case, add a debug assert that the skb frag is
indeed backed by a page, and do a cast.

Add skb[_frag]_fill_netmem_*() and skb_add_rx_frag_netmem() helpers so that
the API can be used to create netmem skbs.

Signed-off-by: Mina Almasry

---

v3:

- Renamed the fields in skb_frag_t.

v2:

- Add skb frag filling helpers.
---
 include/linux/skbuff.h | 92 +++++++++++++++++++++++++++++-------------
 net/core/skbuff.c      | 22 +++++++---
 net/kcm/kcmsock.c      | 10 ++++-
 3 files changed, 89 insertions(+), 35 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 7ce38874dbd1..729c95e97be1 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -37,6 +37,7 @@
 #endif
 #include <net/net_debug.h>
 #include <net/dropreason-core.h>
+#include <net/netmem.h>
 
 /**
  * DOC: skb checksums
@@ -359,7 +360,11 @@ extern int sysctl_max_skb_frags;
  */
 #define GSO_BY_FRAGS	0xFFFF
 
-typedef struct bio_vec skb_frag_t;
+typedef struct skb_frag {
+	netmem_ref netmem;
+	unsigned int len;
+	unsigned int offset;
+} skb_frag_t;
 
 /**
  * skb_frag_size() - Returns the size of a skb fragment
@@ -367,7 +372,7 @@ typedef struct bio_vec skb_frag_t;
  */
 static inline unsigned int skb_frag_size(const skb_frag_t *frag)
 {
-	return frag->bv_len;
+	return frag->len;
 }
 
 /**
@@ -377,7 +382,7 @@ static inline unsigned int skb_frag_size(const skb_frag_t *frag)
  */
 static inline void skb_frag_size_set(skb_frag_t *frag, unsigned int size)
 {
-	frag->bv_len = size;
+	frag->len = size;
 }
 
 /**
@@ -387,7 +392,7 @@ static inline void skb_frag_size_set(skb_frag_t *frag, unsigned int size)
  */
 static inline void skb_frag_size_add(skb_frag_t *frag, int delta)
 {
-	frag->bv_len += delta;
+	frag->len += delta;
 }
 
 /**
@@ -397,7 +402,7 @@ static inline void skb_frag_size_add(skb_frag_t *frag, int delta)
  */
 static inline void skb_frag_size_sub(skb_frag_t *frag, int delta)
 {
-	frag->bv_len -= delta;
+	frag->len -= delta;
 }
 
 /**
@@ -417,7 +422,7 @@ static inline bool skb_frag_must_loop(struct page *p)
  * skb_frag_foreach_page - loop over pages in a fragment
  *
  * @f:		skb frag to operate on
- * @f_off:	offset from start of f->bv_page
+ * @f_off:	offset from start of f->netmem
  * @f_len:	length from f_off to loop over
  * @p:		(temp var) current page
  * @p_off:	(temp var) offset from start of current page,
@@ -2431,22 +2436,37 @@ static inline unsigned int skb_pagelen(const struct sk_buff *skb)
 	return skb_headlen(skb) + __skb_pagelen(skb);
 }
 
+static inline void skb_frag_fill_netmem_desc(skb_frag_t *frag,
+					     netmem_ref netmem, int off,
+					     int size)
+{
+	frag->netmem = netmem;
+	frag->offset = off;
+	skb_frag_size_set(frag, size);
+}
+
 static inline void skb_frag_fill_page_desc(skb_frag_t *frag,
 					   struct page *page,
 					   int off, int size)
 {
-	frag->bv_page = page;
-	frag->bv_offset = off;
-	skb_frag_size_set(frag, size);
+	skb_frag_fill_netmem_desc(frag, page_to_netmem(page), off, size);
+}
+
+static inline void __skb_fill_netmem_desc_noacc(struct skb_shared_info *shinfo,
+						int i, netmem_ref netmem,
+						int off, int size)
+{
+	skb_frag_t *frag = &shinfo->frags[i];
+
+	skb_frag_fill_netmem_desc(frag, netmem, off, size);
 }
 
 static inline void __skb_fill_page_desc_noacc(struct skb_shared_info *shinfo,
 					      int i, struct page *page,
 					      int off, int size)
 {
-	skb_frag_t *frag = &shinfo->frags[i];
-
-	skb_frag_fill_page_desc(frag, page, off, size);
+	__skb_fill_netmem_desc_noacc(shinfo, i, page_to_netmem(page), off,
+				     size);
 }
 
 /**
@@ -2462,10 +2482,10 @@ static inline void skb_len_add(struct sk_buff *skb, int delta)
 }
 
 /**
- * __skb_fill_page_desc - initialise a paged fragment in an skb
+ * __skb_fill_netmem_desc - initialise a fragment in an skb
  * @skb: buffer containing fragment to be initialised
- * @i: paged fragment index to initialise
- * @page: the page to use for this fragment
+ * @i: fragment index to initialise
+ * @netmem: the netmem to use for this fragment
  * @off: the offset to the data with @page
  * @size: the length of the data
  *
@@ -2474,10 +2494,13 @@ static inline void skb_len_add(struct sk_buff *skb, int delta)
  *
  * Does not take any additional reference on the fragment.
  */
-static inline void __skb_fill_page_desc(struct sk_buff *skb, int i,
-					struct page *page, int off, int size)
+static inline void __skb_fill_netmem_desc(struct sk_buff *skb, int i,
+					  netmem_ref netmem, int off,
+					  int size)
 {
-	__skb_fill_page_desc_noacc(skb_shinfo(skb), i, page, off, size);
+	struct page *page = netmem_to_page(netmem);
+
+	__skb_fill_netmem_desc_noacc(skb_shinfo(skb), i, netmem, off, size);
 
 	/* Propagate page pfmemalloc to the skb if we can. The problem is
 	 * that not all callers have unique ownership of the page but rely
@@ -2485,7 +2508,21 @@ static inline void __skb_fill_page_desc(struct sk_buff *skb, int i,
 	 */
 	page = compound_head(page);
 	if (page_is_pfmemalloc(page))
-		skb->pfmemalloc = true;
+		skb->pfmemalloc = true;
+}
+
+static inline void __skb_fill_page_desc(struct sk_buff *skb, int i,
+					struct page *page, int off, int size)
+{
+	__skb_fill_netmem_desc(skb, i, page_to_netmem(page), off, size);
+}
+
+static inline void skb_fill_netmem_desc(struct sk_buff *skb, int i,
+					netmem_ref netmem, int off,
+					int size)
+{
+	__skb_fill_netmem_desc(skb, i, netmem, off, size);
+	skb_shinfo(skb)->nr_frags = i + 1;
 }
 
 /**
@@ -2505,8 +2542,7 @@ static inline void __skb_fill_page_desc(struct sk_buff *skb, int i,
 static inline void skb_fill_page_desc(struct sk_buff *skb, int i,
 				      struct page *page, int off, int size)
 {
-	__skb_fill_page_desc(skb, i, page, off, size);
-	skb_shinfo(skb)->nr_frags = i + 1;
+	skb_fill_netmem_desc(skb, i, page_to_netmem(page), off, size);
 }
 
 /**
@@ -2532,6 +2568,8 @@ static inline void skb_fill_page_desc_noacc(struct sk_buff *skb, int i,
 
 void skb_add_rx_frag(struct sk_buff *skb, int i, struct page *page, int off,
 		     int size, unsigned int truesize);
+void skb_add_rx_frag_netmem(struct sk_buff *skb, int i, netmem_ref netmem,
+			    int off, int size, unsigned int truesize);
 void skb_coalesce_rx_frag(struct sk_buff *skb, int i, int size,
 			  unsigned int truesize);
 
@@ -3380,7 +3418,7 @@ static inline void skb_propagate_pfmemalloc(const struct page *page,
  */
 static inline unsigned int skb_frag_off(const skb_frag_t *frag)
 {
-	return frag->bv_offset;
+	return frag->offset;
 }
 
 /**
@@ -3390,7 +3428,7 @@ static inline unsigned int skb_frag_off(const skb_frag_t *frag)
  */
 static inline void skb_frag_off_add(skb_frag_t *frag, int delta)
 {
-	frag->bv_offset += delta;
+	frag->offset += delta;
 }
 
 /**
@@ -3400,7 +3438,7 @@ static inline void skb_frag_off_add(skb_frag_t *frag, int delta)
  */
 static inline void skb_frag_off_set(skb_frag_t *frag, unsigned int offset)
 {
-	frag->bv_offset = offset;
+	frag->offset = offset;
 }
 
 /**
@@ -3411,7 +3449,7 @@ static inline void skb_frag_off_set(skb_frag_t *frag, unsigned int offset)
 static inline void skb_frag_off_copy(skb_frag_t *fragto,
 				     const skb_frag_t *fragfrom)
 {
-	fragto->bv_offset = fragfrom->bv_offset;
+	fragto->offset = fragfrom->offset;
 }
 
 /**
@@ -3422,7 +3460,7 @@ static inline void skb_frag_off_copy(skb_frag_t *fragto,
  */
 static inline struct page *skb_frag_page(const skb_frag_t *frag)
 {
-	return frag->bv_page;
+	return netmem_to_page(frag->netmem);
 }
 
 /**
@@ -3526,7 +3564,7 @@ static inline void *skb_frag_address_safe(const skb_frag_t *frag)
 static inline void skb_frag_page_copy(skb_frag_t *fragto,
 				      const skb_frag_t *fragfrom)
 {
-	fragto->bv_page = fragfrom->bv_page;
+	fragto->netmem = fragfrom->netmem;
 }
 
 bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t prio);
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 4d4b11b0a83d..8b55e927bbe9 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -845,16 +845,24 @@ struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len,
 }
 EXPORT_SYMBOL(__napi_alloc_skb);
 
-void skb_add_rx_frag(struct sk_buff *skb, int i, struct page *page, int off,
-		     int size, unsigned int truesize)
+void skb_add_rx_frag_netmem(struct sk_buff *skb, int i, netmem_ref netmem,
+			    int off, int size, unsigned int truesize)
 {
 	DEBUG_NET_WARN_ON_ONCE(size > truesize);
 
-	skb_fill_page_desc(skb, i, page, off, size);
+	skb_fill_netmem_desc(skb, i, netmem, off, size);
 	skb->len += size;
 	skb->data_len += size;
 	skb->truesize += truesize;
 }
+EXPORT_SYMBOL(skb_add_rx_frag_netmem);
+
+void skb_add_rx_frag(struct sk_buff *skb, int i, struct page *page, int off,
+		     int size, unsigned int truesize)
+{
+	skb_add_rx_frag_netmem(skb, i, page_to_netmem(page), off, size,
+			       truesize);
+}
 EXPORT_SYMBOL(skb_add_rx_frag);
 
 void skb_coalesce_rx_frag(struct sk_buff *skb, int i, int size,
@@ -1904,10 +1912,11 @@ int skb_copy_ubufs(struct sk_buff *skb, gfp_t gfp_mask)
 
 	/* skb frags point to kernel buffers */
 	for (i = 0; i < new_frags - 1; i++) {
-		__skb_fill_page_desc(skb, i, head, 0, psize);
+		__skb_fill_netmem_desc(skb, i, page_to_netmem(head), 0, psize);
 		head = (struct page *)page_private(head);
 	}
-	__skb_fill_page_desc(skb, new_frags - 1, head, 0, d_off);
+	__skb_fill_netmem_desc(skb, new_frags - 1, page_to_netmem(head), 0,
+			       d_off);
 	skb_shinfo(skb)->nr_frags = new_frags;
 
 release:
@@ -3645,7 +3654,8 @@ skb_zerocopy(struct sk_buff *to, struct sk_buff *from, int len, int hlen)
 	if (plen) {
 		page = virt_to_head_page(from->head);
 		offset = from->data - (unsigned char *)page_address(page);
-		__skb_fill_page_desc(to, 0, page, offset, plen);
+		__skb_fill_netmem_desc(to, 0, page_to_netmem(page),
+				       offset, plen);
 		get_page(page);
 		j = 1;
 		len -= plen;
diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
index 65d1f6755f98..3180a54b2c68 100644
--- a/net/kcm/kcmsock.c
+++ b/net/kcm/kcmsock.c
@@ -636,9 +636,15 @@ static int kcm_write_msgs(struct kcm_sock *kcm)
 		for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
 			msize += skb_shinfo(skb)->frags[i].bv_len;
 
+		/* The cast to struct bio_vec* here assumes the frags are
+		 * struct page based. WARN if there is no page in this skb.
+		 */
+		DEBUG_NET_WARN_ON_ONCE(
+			!skb_frag_page(&skb_shinfo(skb)->frags[0]));
+
 		iov_iter_bvec(&msg.msg_iter, ITER_SOURCE,
-			      skb_shinfo(skb)->frags, skb_shinfo(skb)->nr_frags,
-			      msize);
+			      (const struct bio_vec *)skb_shinfo(skb)->frags,
+			      skb_shinfo(skb)->nr_frags, msize);
 		iov_iter_advance(&msg.msg_iter, txm->frag_offset);
 
 		do {
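
For reference, a sketch (not part of the patch) of how an RX path could
attach a fragment through the new netmem-aware helper. The function and
parameter names are made up, and using PAGE_SIZE as the truesize is a
simplification:

/* Not part of the patch: attach an RX fragment via skb_add_rx_frag_netmem().
 * The page here stands in for whatever memory the pool hands back.
 */
#include <linux/skbuff.h>
#include <net/netmem.h>

static void demo_add_rx_frag(struct sk_buff *skb, struct page *page,
			     unsigned int offset, unsigned int len)
{
	/* The frag is described by a netmem_ref rather than a page, so the
	 * same call site keeps working once frags may be backed by
	 * non-page memory.
	 */
	netmem_ref netmem = page_to_netmem(page);

	skb_add_rx_frag_netmem(skb, skb_shinfo(skb)->nr_frags, netmem,
			       offset, len, PAGE_SIZE);
}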