From patchwork Thu Dec 14 02:05:24 2023
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 178409
Date: Wed, 13 Dec 2023 18:05:24 -0800
In-Reply-To: <20231214020530.2267499-1-almasrymina@google.com>
Message-ID: <20231214020530.2267499-2-almasrymina@google.com>
Subject: [RFC PATCH net-next v1 1/4] vsock/virtio: use skb_frag_page() helper
From: Mina Almasry
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org

Minor fix for virtio: code wanting to access the page backing an skb frag
should use the skb_frag_page() helper instead of accessing bv_page
directly. This allows for extensions where the underlying memory is not
a page.

Signed-off-by: Mina Almasry
Acked-by: Stefano Garzarella
---
 net/vmw_vsock/virtio_transport.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index af5bab1acee1..bd0b413dfa3f 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -153,7 +153,7 @@ virtio_transport_send_pkt_work(struct work_struct *work)
 			 * 'virt_to_phys()' later to fill the buffer descriptor.
 			 * We don't touch memory at "virtual" address of this page.
 			 */
-			va = page_to_virt(skb_frag->bv_page);
+			va = page_to_virt(skb_frag_page(skb_frag));
 			sg_init_one(sgs[out_sg], va + skb_frag->bv_offset,
 				    skb_frag->bv_len);
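
As an aside on the helper itself: skb_frag_page() keeps callers independent
of how skb_frag_t stores its backing memory, which is what the rest of this
series relies on. A minimal sketch of the accessor pattern (illustrative
only, not part of the patch; the wrapper name is made up):

	/* Return the kernel virtual address of a frag's payload without
	 * touching skb_frag_t internals. Uses the existing skb_frag_page()
	 * and skb_frag_off() helpers and assumes a directly-mapped page.
	 */
	static void *frag_payload_virt(const skb_frag_t *frag)
	{
		return page_to_virt(skb_frag_page(frag)) + skb_frag_off(frag);
	}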
From patchwork Thu Dec 14 02:05:25 2023
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 178412
Date: Wed, 13 Dec 2023 18:05:25 -0800
In-Reply-To: <20231214020530.2267499-1-almasrymina@google.com>
Message-ID: <20231214020530.2267499-3-almasrymina@google.com>
Subject: [RFC PATCH net-next v1 2/4] net: introduce abstraction for network memory
From: Mina Almasry
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org

Add the netmem_t type, an abstraction for network memory.

To add support for new memory types to the net stack, we must first
abstract the current memory type from the net stack. Currently parts of
the net stack use struct page directly:

- page_pool
- drivers
- skb_frag_t

Originally the plan was to reuse struct page* for the new memory types,
and to set the LSB on the page* to indicate it's not really a page.
However, for compiler type checking we need to introduce a new type.

netmem_t is introduced to abstract the underlying memory type. Currently
it's a no-op abstraction that is always a struct page underneath.
In parallel there is an ongoing effort to add support for devmem to the
net stack:

https://lore.kernel.org/netdev/20231208005250.2910004-1-almasrymina@google.com/

Signed-off-by: Mina Almasry
---
 include/net/netmem.h | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)
 create mode 100644 include/net/netmem.h

diff --git a/include/net/netmem.h b/include/net/netmem.h
new file mode 100644
index 000000000000..e4309242d8be
--- /dev/null
+++ b/include/net/netmem.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0
+ *
+ *	netmem.h
+ *	Author:	Mina Almasry
+ *	Copyright (C) 2023 Google LLC
+ */
+
+#ifndef _NET_NETMEM_H
+#define _NET_NETMEM_H
+
+struct netmem {
+	union {
+		struct page page;
+
+		/* Stub to prevent compiler implicitly converting from page*
+		 * to netmem_t* and vice versa.
+		 *
+		 * Other memory type(s) net stack would like to support
+		 * can be added to this union.
+		 */
+		void *addr;
+	};
+};
+
+static inline struct page *netmem_to_page(struct netmem *netmem)
+{
+	return &netmem->page;
+}
+
+static inline struct netmem *page_to_netmem(struct page *page)
+{
+	return (struct netmem *)page;
+}
+
+#endif /* _NET_NETMEM_H */
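
To make the intent concrete, a small usage sketch (illustrative only, not
additional patch content): the conversions are free because struct netmem
is laid out over struct page, but the distinct pointer type lets the
compiler catch accidental mixing of the two.

	/* Hypothetical helper: hand a page to code that speaks netmem and
	 * take it back. Both conversions compile to plain pointer casts
	 * while netmem is page-only.
	 */
	static struct page *netmem_roundtrip(struct page *page)
	{
		struct netmem *nm = page_to_netmem(page);

		return netmem_to_page(nm);
	}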
From patchwork Thu Dec 14 02:05:26 2023
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 178410
Date: Wed, 13 Dec 2023 18:05:26 -0800
In-Reply-To: <20231214020530.2267499-1-almasrymina@google.com>
Message-ID: <20231214020530.2267499-4-almasrymina@google.com>
Subject: [RFC PATCH net-next v1 3/4] net: add netmem_t to skb_frag_t
From: Mina Almasry
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org

Use netmem_t instead of page directly in skb_frag_t. Currently netmem_t
is always a struct page underneath, but the abstraction allows efforts
to add support for skb frags not backed by pages.

There is unfortunately one instance in kcm where the skb_frag_t is
assumed to be a bio_vec. For this case, add a debug assert that the skb
frag is indeed backed by a page, and do a cast.
Signed-off-by: Mina Almasry
---
 include/linux/skbuff.h | 11 ++++++++---
 net/kcm/kcmsock.c      |  9 +++++++--
 2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index b370eb8d70f7..6d681c40213c 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -37,6 +37,7 @@
 #endif
 #include
 #include
+#include <net/netmem.h>
 
 /**
  * DOC: skb checksums
@@ -359,7 +360,11 @@ extern int sysctl_max_skb_frags;
  */
 #define GSO_BY_FRAGS	0xFFFF
 
-typedef struct bio_vec skb_frag_t;
+typedef struct skb_frag {
+	struct netmem *bv_page;
+	unsigned int bv_len;
+	unsigned int bv_offset;
+} skb_frag_t;
 
 /**
  * skb_frag_size() - Returns the size of a skb fragment
@@ -2435,7 +2440,7 @@ static inline void skb_frag_fill_page_desc(skb_frag_t *frag,
 					   struct page *page,
 					   int off, int size)
 {
-	frag->bv_page = page;
+	frag->bv_page = page_to_netmem(page);
 	frag->bv_offset = off;
 	skb_frag_size_set(frag, size);
 }
@@ -3422,7 +3427,7 @@ static inline void skb_frag_off_copy(skb_frag_t *fragto,
  */
 static inline struct page *skb_frag_page(const skb_frag_t *frag)
 {
-	return frag->bv_page;
+	return netmem_to_page(frag->bv_page);
 }
 
 /**
diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
index 65d1f6755f98..926349eeeaf6 100644
--- a/net/kcm/kcmsock.c
+++ b/net/kcm/kcmsock.c
@@ -636,9 +636,14 @@ static int kcm_write_msgs(struct kcm_sock *kcm)
 		for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
 			msize += skb_shinfo(skb)->frags[i].bv_len;
 
+		/* The cast to struct bio_vec* here assumes the frags are
+		 * struct page based.
+		 */
+		DEBUG_NET_WARN_ON_ONCE(!skb_frag_page(&skb_shinfo(skb)->frags[0]));
+
 		iov_iter_bvec(&msg.msg_iter, ITER_SOURCE,
-			      skb_shinfo(skb)->frags, skb_shinfo(skb)->nr_frags,
-			      msize);
+			      (const struct bio_vec *)skb_shinfo(skb)->frags,
+			      skb_shinfo(skb)->nr_frags, msize);
 		iov_iter_advance(&msg.msg_iter, txm->frag_offset);
 
 		do {
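
The kcm cast stays safe only while the new skb_frag layout remains
bit-for-bit compatible with struct bio_vec and the frag is page backed.
A possible compile-time guard, shown for illustration only (it is not
part of this patch), could sit next to the cast in kcm_write_msgs():

	/* Illustrative: catch the two layouts drifting apart at build time. */
	BUILD_BUG_ON(sizeof(skb_frag_t) != sizeof(struct bio_vec));
	BUILD_BUG_ON(offsetof(skb_frag_t, bv_page) != offsetof(struct bio_vec, bv_page));
	BUILD_BUG_ON(offsetof(skb_frag_t, bv_len) != offsetof(struct bio_vec, bv_len));
	BUILD_BUG_ON(offsetof(skb_frag_t, bv_offset) != offsetof(struct bio_vec, bv_offset));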
From patchwork Thu Dec 14 02:05:27 2023
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 178411
Date: Wed, 13 Dec 2023 18:05:27 -0800
In-Reply-To: <20231214020530.2267499-1-almasrymina@google.com>
Message-ID: <20231214020530.2267499-5-almasrymina@google.com>
Subject: [RFC PATCH net-next v1 4/4] net: page_pool: use netmem_t instead of struct page in API
From: Mina Almasry
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org

Replace struct page in the page_pool API with the new netmem_t. Currently
the changes are to the API layer only. The internals of the page_pool and
the drivers still convert the netmem_t to a page and use it regularly.

Drivers that don't support memory types other than page can still use
netmem_t as page only. Drivers that add support for other memory types,
such as devmem TCP, will need to be modified to use the generic netmem_t
rather than assuming the underlying memory is always a page.

Similarly, the page_pool (and future pools) that add support for non-page
memory will need to use the generic netmem_t. page_pools that only support
one memory type (page or otherwise) can use that memory type internally
and convert it to netmem_t before delivering it to the driver, for a more
consistent API exposed to the drivers.
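
To illustrate the call-site pattern the conversion below applies, here is
a condensed sketch (not an additional hunk; the pool variable is made up):

	/* Allocation: the pool API now speaks netmem. A page-only driver
	 * unwraps it immediately and keeps using struct page internally.
	 */
	struct netmem *nm = page_pool_dev_alloc_pages(pool);
	struct page *page = netmem_to_page(nm);
	dma_addr_t dma = page_pool_get_dma_addr(nm);

	/* Release: wrap the page back into netmem when returning it. */
	page_pool_recycle_direct(pool, page_to_netmem(page));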
Signed-off-by: Mina Almasry
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c     | 15 ++--
 drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c |  8 ++-
 drivers/net/ethernet/engleder/tsnep_main.c    | 22 +++---
 drivers/net/ethernet/freescale/fec_main.c     | 33 ++++++---
 .../net/ethernet/hisilicon/hns3/hns3_enet.c   | 14 ++--
 drivers/net/ethernet/intel/idpf/idpf_txrx.c   |  2 +-
 drivers/net/ethernet/intel/idpf/idpf_txrx.h   | 15 ++--
 drivers/net/ethernet/marvell/mvneta.c         | 24 ++++---
 .../net/ethernet/marvell/mvpp2/mvpp2_main.c   | 18 +++--
 .../marvell/octeontx2/nic/otx2_common.c       |  8 ++-
 drivers/net/ethernet/mediatek/mtk_eth_soc.c   | 22 +++---
 .../net/ethernet/mellanox/mlx5/core/en/xdp.c  | 27 ++++---
 .../net/ethernet/mellanox/mlx5/core/en_rx.c   | 28 ++++----
 .../ethernet/microchip/lan966x/lan966x_fdma.c | 16 +++--
 drivers/net/ethernet/microsoft/mana/mana_en.c | 10 +--
 drivers/net/ethernet/socionext/netsec.c       | 25 ++++---
 .../net/ethernet/stmicro/stmmac/stmmac_main.c | 48 ++++++++-----
 drivers/net/ethernet/ti/cpsw.c                | 11 +--
 drivers/net/ethernet/ti/cpsw_new.c            | 11 +--
 drivers/net/ethernet/ti/cpsw_priv.c           | 12 ++--
 drivers/net/ethernet/wangxun/libwx/wx_lib.c   | 18 +++--
 drivers/net/veth.c                            |  5 +-
 drivers/net/vmxnet3/vmxnet3_drv.c             |  7 +-
 drivers/net/vmxnet3/vmxnet3_xdp.c             | 20 +++---
 drivers/net/wireless/mediatek/mt76/dma.c      |  4 +-
 drivers/net/wireless/mediatek/mt76/mt76.h     |  5 +-
 .../net/wireless/mediatek/mt76/mt7915/mmio.c  |  4 +-
 drivers/net/xen-netfront.c                    |  4 +-
 include/net/page_pool/helpers.h               | 72 ++++++++++---------
 include/net/page_pool/types.h                 |  9 +--
 net/bpf/test_run.c                            |  2 +-
 net/core/page_pool.c                          | 39 +++++-----
 net/core/skbuff.c                             |  2 +-
 net/core/xdp.c                                |  3 +-
 34 files changed, 330 insertions(+), 233 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index be3fa0545fdc..9e37da8ed389 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -807,16 +807,17 @@ static struct page *__bnxt_alloc_rx_page(struct bnxt *bp, dma_addr_t *mapping,
 	struct page *page;
 
 	if (PAGE_SIZE > BNXT_RX_PAGE_SIZE) {
-		page = page_pool_dev_alloc_frag(rxr->page_pool, offset,
-						BNXT_RX_PAGE_SIZE);
+		page = netmem_to_page(page_pool_dev_alloc_frag(rxr->page_pool,
+							       offset,
+							       BNXT_RX_PAGE_SIZE));
 	} else {
-		page = page_pool_dev_alloc_pages(rxr->page_pool);
+		page = netmem_to_page(page_pool_dev_alloc_pages(rxr->page_pool));
 		*offset = 0;
 	}
 	if (!page)
 		return NULL;
 
-	*mapping = page_pool_get_dma_addr(page) + *offset;
+	*mapping = page_pool_get_dma_addr(page_to_netmem(page)) + *offset;
 	return page;
 }
 
@@ -1040,7 +1041,7 @@ static struct sk_buff *bnxt_rx_multi_page_skb(struct bnxt *bp,
 					      bp->rx_dir);
 	skb = napi_build_skb(data_ptr - bp->rx_offset, BNXT_RX_PAGE_SIZE);
 	if (!skb) {
-		page_pool_recycle_direct(rxr->page_pool, page);
+		page_pool_recycle_direct(rxr->page_pool, page_to_netmem(page));
 		return NULL;
 	}
 	skb_mark_for_recycle(skb);
@@ -1078,7 +1079,7 @@ static struct sk_buff *bnxt_rx_page_skb(struct bnxt *bp,
 	skb = napi_alloc_skb(&rxr->bnapi->napi,
payload); if (!skb) { - page_pool_recycle_direct(rxr->page_pool, page); + page_pool_recycle_direct(rxr->page_pool, page_to_netmem(page)); return NULL; } @@ -3283,7 +3284,7 @@ static void bnxt_free_one_rx_ring_skbs(struct bnxt *bp, int ring_nr) rx_agg_buf->page = NULL; __clear_bit(i, rxr->rx_agg_bmap); - page_pool_recycle_direct(rxr->page_pool, page); + page_pool_recycle_direct(rxr->page_pool, page_to_netmem(page)); } skip_rx_agg_free: diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c index 037624f17aea..3b6b09f835e4 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c @@ -161,7 +161,8 @@ void bnxt_tx_int_xdp(struct bnxt *bp, struct bnxt_napi *bnapi, int budget) for (j = 0; j < frags; j++) { tx_cons = NEXT_TX(tx_cons); tx_buf = &txr->tx_buf_ring[RING_TX(bp, tx_cons)]; - page_pool_recycle_direct(rxr->page_pool, tx_buf->page); + page_pool_recycle_direct(rxr->page_pool, + page_to_netmem(tx_buf->page)); } } else { bnxt_sched_reset_txr(bp, txr, tx_cons); @@ -219,7 +220,7 @@ void bnxt_xdp_buff_frags_free(struct bnxt_rx_ring_info *rxr, for (i = 0; i < shinfo->nr_frags; i++) { struct page *page = skb_frag_page(&shinfo->frags[i]); - page_pool_recycle_direct(rxr->page_pool, page); + page_pool_recycle_direct(rxr->page_pool, page_to_netmem(page)); } shinfo->nr_frags = 0; } @@ -320,7 +321,8 @@ bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons, if (xdp_do_redirect(bp->dev, &xdp, xdp_prog)) { trace_xdp_exception(bp->dev, xdp_prog, act); - page_pool_recycle_direct(rxr->page_pool, page); + page_pool_recycle_direct(rxr->page_pool, + page_to_netmem(page)); return true; } diff --git a/drivers/net/ethernet/engleder/tsnep_main.c b/drivers/net/ethernet/engleder/tsnep_main.c index df40c720e7b2..ce32dcf7c6f8 100644 --- a/drivers/net/ethernet/engleder/tsnep_main.c +++ b/drivers/net/ethernet/engleder/tsnep_main.c @@ -641,7 +641,7 @@ static int tsnep_xdp_tx_map(struct xdp_frame *xdpf, struct tsnep_tx *tx, } else { page = unlikely(frag) ? 
skb_frag_page(frag) : virt_to_page(xdpf->data); - dma = page_pool_get_dma_addr(page); + dma = page_pool_get_dma_addr(page_to_netmem(page)); if (unlikely(frag)) dma += skb_frag_off(frag); else @@ -940,7 +940,8 @@ static void tsnep_rx_ring_cleanup(struct tsnep_rx *rx) for (i = 0; i < TSNEP_RING_SIZE; i++) { entry = &rx->entry[i]; if (!rx->xsk_pool && entry->page) - page_pool_put_full_page(rx->page_pool, entry->page, + page_pool_put_full_page(rx->page_pool, + page_to_netmem(entry->page), false); if (rx->xsk_pool && entry->xdp) xsk_buff_free(entry->xdp); @@ -1066,7 +1067,8 @@ static void tsnep_rx_free_page_buffer(struct tsnep_rx *rx) */ page = rx->page_buffer; while (*page) { - page_pool_put_full_page(rx->page_pool, *page, false); + page_pool_put_full_page(rx->page_pool, page_to_netmem(*page), + false); *page = NULL; page++; } @@ -1080,7 +1082,8 @@ static int tsnep_rx_alloc_page_buffer(struct tsnep_rx *rx) * be filled completely */ for (i = 0; i < TSNEP_RING_SIZE - 1; i++) { - rx->page_buffer[i] = page_pool_dev_alloc_pages(rx->page_pool); + rx->page_buffer[i] = + netmem_to_page(page_pool_dev_alloc_pages(rx->page_pool)); if (!rx->page_buffer[i]) { tsnep_rx_free_page_buffer(rx); @@ -1096,7 +1099,7 @@ static void tsnep_rx_set_page(struct tsnep_rx *rx, struct tsnep_rx_entry *entry, { entry->page = page; entry->len = TSNEP_MAX_RX_BUF_SIZE; - entry->dma = page_pool_get_dma_addr(entry->page); + entry->dma = page_pool_get_dma_addr(page_to_netmem(entry->page)); entry->desc->rx = __cpu_to_le64(entry->dma + TSNEP_RX_OFFSET); } @@ -1105,7 +1108,7 @@ static int tsnep_rx_alloc_buffer(struct tsnep_rx *rx, int index) struct tsnep_rx_entry *entry = &rx->entry[index]; struct page *page; - page = page_pool_dev_alloc_pages(rx->page_pool); + page = netmem_to_page(page_pool_dev_alloc_pages(rx->page_pool)); if (unlikely(!page)) return -ENOMEM; tsnep_rx_set_page(rx, entry, page); @@ -1296,7 +1299,8 @@ static bool tsnep_xdp_run_prog(struct tsnep_rx *rx, struct bpf_prog *prog, sync = xdp->data_end - xdp->data_hard_start - XDP_PACKET_HEADROOM; sync = max(sync, length); - page_pool_put_page(rx->page_pool, virt_to_head_page(xdp->data), + page_pool_put_page(rx->page_pool, + page_to_netmem(virt_to_head_page(xdp->data)), sync, true); return true; } @@ -1400,7 +1404,7 @@ static void tsnep_rx_page(struct tsnep_rx *rx, struct napi_struct *napi, napi_gro_receive(napi, skb); } else { - page_pool_recycle_direct(rx->page_pool, page); + page_pool_recycle_direct(rx->page_pool, page_to_netmem(page)); rx->dropped++; } @@ -1599,7 +1603,7 @@ static int tsnep_rx_poll_zc(struct tsnep_rx *rx, struct napi_struct *napi, } } - page = page_pool_dev_alloc_pages(rx->page_pool); + page = netmem_to_page(page_pool_dev_alloc_pages(rx->page_pool)); if (page) { memcpy(page_address(page) + TSNEP_RX_OFFSET, entry->xdp->data - TSNEP_RX_INLINE_METADATA_SIZE, diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c index bae9536de767..4da3e6161a73 100644 --- a/drivers/net/ethernet/freescale/fec_main.c +++ b/drivers/net/ethernet/freescale/fec_main.c @@ -996,7 +996,9 @@ static void fec_enet_bd_init(struct net_device *dev) struct page *page = txq->tx_buf[i].buf_p; if (page) - page_pool_put_page(page->pp, page, 0, false); + page_pool_put_page(page->pp, + page_to_netmem(page), + 0, false); } txq->tx_buf[i].buf_p = NULL; @@ -1520,7 +1522,8 @@ fec_enet_tx_queue(struct net_device *ndev, u16 queue_id, int budget) xdp_return_frame_rx_napi(xdpf); } else { /* recycle pages of XDP_TX frames */ /* The dma_sync_size = 0 as XDP_TX 
has already synced DMA for_device */ - page_pool_put_page(page->pp, page, 0, true); + page_pool_put_page(page->pp, page_to_netmem(page), 0, + true); } txq->tx_buf[index].buf_p = NULL; @@ -1568,12 +1571,13 @@ static void fec_enet_update_cbd(struct fec_enet_priv_rx_q *rxq, struct page *new_page; dma_addr_t phys_addr; - new_page = page_pool_dev_alloc_pages(rxq->page_pool); + new_page = netmem_to_page(page_pool_dev_alloc_pages(rxq->page_pool)); WARN_ON(!new_page); rxq->rx_skb_info[index].page = new_page; rxq->rx_skb_info[index].offset = FEC_ENET_XDP_HEADROOM; - phys_addr = page_pool_get_dma_addr(new_page) + FEC_ENET_XDP_HEADROOM; + phys_addr = page_pool_get_dma_addr(page_to_netmem(new_page)) + + FEC_ENET_XDP_HEADROOM; bdp->cbd_bufaddr = cpu_to_fec32(phys_addr); } @@ -1633,7 +1637,8 @@ fec_enet_run_xdp(struct fec_enet_private *fep, struct bpf_prog *prog, xdp_err: ret = FEC_ENET_XDP_CONSUMED; page = virt_to_head_page(xdp->data); - page_pool_put_page(rxq->page_pool, page, sync, true); + page_pool_put_page(rxq->page_pool, page_to_netmem(page), sync, + true); if (act != XDP_DROP) trace_xdp_exception(fep->netdev, prog, act); break; @@ -1761,7 +1766,8 @@ fec_enet_rx_queue(struct net_device *ndev, int budget, u16 queue_id) */ skb = build_skb(page_address(page), PAGE_SIZE); if (unlikely(!skb)) { - page_pool_recycle_direct(rxq->page_pool, page); + page_pool_recycle_direct(rxq->page_pool, + page_to_netmem(page)); ndev->stats.rx_dropped++; netdev_err_once(ndev, "build_skb failed!\n"); @@ -3264,7 +3270,9 @@ static void fec_enet_free_buffers(struct net_device *ndev) for (q = 0; q < fep->num_rx_queues; q++) { rxq = fep->rx_queue[q]; for (i = 0; i < rxq->bd.ring_size; i++) - page_pool_put_full_page(rxq->page_pool, rxq->rx_skb_info[i].page, false); + page_pool_put_full_page(rxq->page_pool, + page_to_netmem(rxq->rx_skb_info[i].page), + false); for (i = 0; i < XDP_STATS_TOTAL; i++) rxq->stats[i] = 0; @@ -3293,7 +3301,9 @@ static void fec_enet_free_buffers(struct net_device *ndev) } else { struct page *page = txq->tx_buf[i].buf_p; - page_pool_put_page(page->pp, page, 0, false); + page_pool_put_page(page->pp, + page_to_netmem(page), 0, + false); } txq->tx_buf[i].buf_p = NULL; @@ -3390,11 +3400,12 @@ fec_enet_alloc_rxq_buffers(struct net_device *ndev, unsigned int queue) } for (i = 0; i < rxq->bd.ring_size; i++) { - page = page_pool_dev_alloc_pages(rxq->page_pool); + page = netmem_to_page(page_pool_dev_alloc_pages(rxq->page_pool)); if (!page) goto err_alloc; - phys_addr = page_pool_get_dma_addr(page) + FEC_ENET_XDP_HEADROOM; + phys_addr = page_pool_get_dma_addr(page_to_netmem(page)) + + FEC_ENET_XDP_HEADROOM; bdp->cbd_bufaddr = cpu_to_fec32(phys_addr); rxq->rx_skb_info[i].page = page; @@ -3856,7 +3867,7 @@ static int fec_enet_txq_xmit_frame(struct fec_enet_private *fep, struct page *page; page = virt_to_page(xdpb->data); - dma_addr = page_pool_get_dma_addr(page) + + dma_addr = page_pool_get_dma_addr(page_to_netmem(page)) + (xdpb->data - xdpb->data_hard_start); dma_sync_single_for_device(&fep->pdev->dev, dma_addr, dma_sync_len, DMA_BIDIRECTIONAL); diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c index b618797a7e8d..0ab015cb1b51 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c @@ -3371,15 +3371,15 @@ static int hns3_alloc_buffer(struct hns3_enet_ring *ring, struct page *p; if (ring->page_pool) { - p = page_pool_dev_alloc_frag(ring->page_pool, - &cb->page_offset, - hns3_buf_size(ring)); + p = 
netmem_to_page(page_pool_dev_alloc_frag(ring->page_pool, + &cb->page_offset, + hns3_buf_size(ring))); if (unlikely(!p)) return -ENOMEM; cb->priv = p; cb->buf = page_address(p); - cb->dma = page_pool_get_dma_addr(p); + cb->dma = page_pool_get_dma_addr(page_to_netmem(p)); cb->type = DESC_TYPE_PP_FRAG; cb->reuse_flag = 0; return 0; @@ -3411,7 +3411,8 @@ static void hns3_free_buffer(struct hns3_enet_ring *ring, if (cb->type & DESC_TYPE_PAGE && cb->pagecnt_bias) __page_frag_cache_drain(cb->priv, cb->pagecnt_bias); else if (cb->type & DESC_TYPE_PP_FRAG) - page_pool_put_full_page(ring->page_pool, cb->priv, + page_pool_put_full_page(ring->page_pool, + page_to_netmem(cb->priv), false); } memset(cb, 0, sizeof(*cb)); @@ -4058,7 +4059,8 @@ static int hns3_alloc_skb(struct hns3_enet_ring *ring, unsigned int length, if (dev_page_is_reusable(desc_cb->priv)) desc_cb->reuse_flag = 1; else if (desc_cb->type & DESC_TYPE_PP_FRAG) - page_pool_put_full_page(ring->page_pool, desc_cb->priv, + page_pool_put_full_page(ring->page_pool, + page_to_netmem(desc_cb->priv), false); else /* This page cannot be reused so discard it */ __page_frag_cache_drain(desc_cb->priv, diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c index 1f728a9004d9..bcef8b49652a 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c @@ -336,7 +336,7 @@ static void idpf_rx_page_rel(struct idpf_queue *rxq, struct idpf_rx_buf *rx_buf) if (unlikely(!rx_buf->page)) return; - page_pool_put_full_page(rxq->pp, rx_buf->page, false); + page_pool_put_full_page(rxq->pp, page_to_netmem(rx_buf->page), false); rx_buf->page = NULL; rx_buf->page_offset = 0; diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h index df76493faa75..5efe4920326b 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h @@ -932,18 +932,19 @@ static inline dma_addr_t idpf_alloc_page(struct page_pool *pool, unsigned int buf_size) { if (buf_size == IDPF_RX_BUF_2048) - buf->page = page_pool_dev_alloc_frag(pool, &buf->page_offset, - buf_size); + buf->page = netmem_to_page(page_pool_dev_alloc_frag(pool, + &buf->page_offset, + buf_size)); else - buf->page = page_pool_dev_alloc_pages(pool); + buf->page = netmem_to_page(page_pool_dev_alloc_pages(pool)); if (!buf->page) return DMA_MAPPING_ERROR; buf->truesize = buf_size; - return page_pool_get_dma_addr(buf->page) + buf->page_offset + - pool->p.offset; + return page_pool_get_dma_addr(page_to_netmem(buf->page)) + + buf->page_offset + pool->p.offset; } /** @@ -952,7 +953,7 @@ static inline dma_addr_t idpf_alloc_page(struct page_pool *pool, */ static inline void idpf_rx_put_page(struct idpf_rx_buf *rx_buf) { - page_pool_put_page(rx_buf->page->pp, rx_buf->page, + page_pool_put_page(rx_buf->page->pp, page_to_netmem(rx_buf->page), rx_buf->truesize, true); rx_buf->page = NULL; } @@ -968,7 +969,7 @@ static inline void idpf_rx_sync_for_cpu(struct idpf_rx_buf *rx_buf, u32 len) struct page_pool *pp = page->pp; dma_sync_single_range_for_cpu(pp->p.dev, - page_pool_get_dma_addr(page), + page_pool_get_dma_addr(page_to_netmem(page)), rx_buf->page_offset + pp->p.offset, len, page_pool_get_dma_dir(pp)); } diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c index 29aac327574d..f20c09fa6764 100644 --- a/drivers/net/ethernet/marvell/mvneta.c +++ b/drivers/net/ethernet/marvell/mvneta.c @@ -1940,12 +1940,13 @@ static int 
mvneta_rx_refill(struct mvneta_port *pp, dma_addr_t phys_addr; struct page *page; - page = page_pool_alloc_pages(rxq->page_pool, - gfp_mask | __GFP_NOWARN); + page = netmem_to_page(page_pool_alloc_pages(rxq->page_pool, + gfp_mask | __GFP_NOWARN)); if (!page) return -ENOMEM; - phys_addr = page_pool_get_dma_addr(page) + pp->rx_offset_correction; + phys_addr = page_pool_get_dma_addr(page_to_netmem(page)) + + pp->rx_offset_correction; mvneta_rx_desc_fill(rx_desc, phys_addr, page, rxq); return 0; @@ -2013,7 +2014,8 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp, if (!data || !(rx_desc->buf_phys_addr)) continue; - page_pool_put_full_page(rxq->page_pool, data, false); + page_pool_put_full_page(rxq->page_pool, page_to_netmem(data), + false); } if (xdp_rxq_info_is_reg(&rxq->xdp_rxq)) xdp_rxq_info_unreg(&rxq->xdp_rxq); @@ -2080,10 +2082,12 @@ mvneta_xdp_put_buff(struct mvneta_port *pp, struct mvneta_rx_queue *rxq, for (i = 0; i < sinfo->nr_frags; i++) page_pool_put_full_page(rxq->page_pool, - skb_frag_page(&sinfo->frags[i]), true); + page_to_netmem(skb_frag_page(&sinfo->frags[i])), + true); out: - page_pool_put_page(rxq->page_pool, virt_to_head_page(xdp->data), + page_pool_put_page(rxq->page_pool, + page_to_netmem(virt_to_head_page(xdp->data)), sync_len, true); } @@ -2132,7 +2136,7 @@ mvneta_xdp_submit_frame(struct mvneta_port *pp, struct mvneta_tx_queue *txq, } else { page = unlikely(frag) ? skb_frag_page(frag) : virt_to_page(xdpf->data); - dma_addr = page_pool_get_dma_addr(page); + dma_addr = page_pool_get_dma_addr(page_to_netmem(page)); if (unlikely(frag)) dma_addr += skb_frag_off(frag); else @@ -2386,7 +2390,8 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp, if (page_is_pfmemalloc(page)) xdp_buff_set_frag_pfmemalloc(xdp); } else { - page_pool_put_full_page(rxq->page_pool, page, true); + page_pool_put_full_page(rxq->page_pool, page_to_netmem(page), + true); } *size -= len; } @@ -2471,7 +2476,8 @@ static int mvneta_rx_swbm(struct napi_struct *napi, } else { if (unlikely(!xdp_buf.data_hard_start)) { rx_desc->buf_phys_addr = 0; - page_pool_put_full_page(rxq->page_pool, page, + page_pool_put_full_page(rxq->page_pool, + page_to_netmem(page), true); goto next; } diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c index 93137606869e..32ae784b1484 100644 --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c @@ -361,7 +361,7 @@ static void *mvpp2_frag_alloc(const struct mvpp2_bm_pool *pool, struct page_pool *page_pool) { if (page_pool) - return page_pool_dev_alloc_pages(page_pool); + return netmem_to_page(page_pool_dev_alloc_pages(page_pool)); if (likely(pool->frag_size <= PAGE_SIZE)) return netdev_alloc_frag(pool->frag_size); @@ -373,7 +373,9 @@ static void mvpp2_frag_free(const struct mvpp2_bm_pool *pool, struct page_pool *page_pool, void *data) { if (page_pool) - page_pool_put_full_page(page_pool, virt_to_head_page(data), false); + page_pool_put_full_page(page_pool, + page_to_netmem(virt_to_head_page(data)), + false); else if (likely(pool->frag_size <= PAGE_SIZE)) skb_free_frag(data); else @@ -750,7 +752,7 @@ static void *mvpp2_buf_alloc(struct mvpp2_port *port, if (page_pool) { page = (struct page *)data; - dma_addr = page_pool_get_dma_addr(page); + dma_addr = page_pool_get_dma_addr(page_to_netmem(page)); data = page_to_virt(page); } else { dma_addr = dma_map_single(port->dev->dev.parent, data, @@ -3687,7 +3689,7 @@ mvpp2_xdp_submit_frame(struct mvpp2_port *port, u16 
txq_id, /* XDP_TX */ struct page *page = virt_to_page(xdpf->data); - dma_addr = page_pool_get_dma_addr(page) + + dma_addr = page_pool_get_dma_addr(page_to_netmem(page)) + sizeof(*xdpf) + xdpf->headroom; dma_sync_single_for_device(port->dev->dev.parent, dma_addr, xdpf->len, DMA_BIDIRECTIONAL); @@ -3809,7 +3811,8 @@ mvpp2_run_xdp(struct mvpp2_port *port, struct bpf_prog *prog, if (unlikely(err)) { ret = MVPP2_XDP_DROPPED; page = virt_to_head_page(xdp->data); - page_pool_put_page(pp, page, sync, true); + page_pool_put_page(pp, page_to_netmem(page), sync, + true); } else { ret = MVPP2_XDP_REDIR; stats->xdp_redirect++; @@ -3819,7 +3822,8 @@ mvpp2_run_xdp(struct mvpp2_port *port, struct bpf_prog *prog, ret = mvpp2_xdp_xmit_back(port, xdp); if (ret != MVPP2_XDP_TX) { page = virt_to_head_page(xdp->data); - page_pool_put_page(pp, page, sync, true); + page_pool_put_page(pp, page_to_netmem(page), sync, + true); } break; default: @@ -3830,7 +3834,7 @@ mvpp2_run_xdp(struct mvpp2_port *port, struct bpf_prog *prog, fallthrough; case XDP_DROP: page = virt_to_head_page(xdp->data); - page_pool_put_page(pp, page, sync, true); + page_pool_put_page(pp, page_to_netmem(page), sync, true); ret = MVPP2_XDP_DROPPED; stats->xdp_drop++; break; diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c index 7ca6941ea0b9..bbff52a24cab 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c @@ -530,11 +530,12 @@ static int otx2_alloc_pool_buf(struct otx2_nic *pfvf, struct otx2_pool *pool, sz = SKB_DATA_ALIGN(pool->rbsize); sz = ALIGN(sz, OTX2_ALIGN); - page = page_pool_alloc_frag(pool->page_pool, &offset, sz, GFP_ATOMIC); + page = netmem_to_page(page_pool_alloc_frag(pool->page_pool, + &offset, sz, GFP_ATOMIC)); if (unlikely(!page)) return -ENOMEM; - *dma = page_pool_get_dma_addr(page) + offset; + *dma = page_pool_get_dma_addr(page_to_netmem(page)) + offset; return 0; } @@ -1208,7 +1209,8 @@ void otx2_free_bufs(struct otx2_nic *pfvf, struct otx2_pool *pool, page = virt_to_head_page(phys_to_virt(pa)); if (pool->page_pool) { - page_pool_put_full_page(pool->page_pool, page, true); + page_pool_put_full_page(pool->page_pool, page_to_netmem(page), + true); } else { dma_unmap_page_attrs(pfvf->dev, iova, size, DMA_FROM_DEVICE, diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c index a6e91573f8da..68146071a919 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c @@ -1735,11 +1735,13 @@ static void *mtk_page_pool_get_buff(struct page_pool *pp, dma_addr_t *dma_addr, { struct page *page; - page = page_pool_alloc_pages(pp, gfp_mask | __GFP_NOWARN); + page = netmem_to_page(page_pool_alloc_pages(pp, + gfp_mask | __GFP_NOWARN)); if (!page) return NULL; - *dma_addr = page_pool_get_dma_addr(page) + MTK_PP_HEADROOM; + *dma_addr = + page_pool_get_dma_addr(page_to_netmem(page)) + MTK_PP_HEADROOM; return page_address(page); } @@ -1747,7 +1749,8 @@ static void mtk_rx_put_buff(struct mtk_rx_ring *ring, void *data, bool napi) { if (ring->page_pool) page_pool_put_full_page(ring->page_pool, - virt_to_head_page(data), napi); + page_to_netmem(virt_to_head_page(data)), + napi); else skb_free_frag(data); } @@ -1771,7 +1774,7 @@ static int mtk_xdp_frame_map(struct mtk_eth *eth, struct net_device *dev, } else { struct page *page = virt_to_head_page(data); - txd_info->addr = page_pool_get_dma_addr(page) + 
+ txd_info->addr = page_pool_get_dma_addr(page_to_netmem(page)) + sizeof(struct xdp_frame) + headroom; dma_sync_single_for_device(eth->dma_dev, txd_info->addr, txd_info->size, DMA_BIDIRECTIONAL); @@ -1985,7 +1988,8 @@ static u32 mtk_xdp_run(struct mtk_eth *eth, struct mtk_rx_ring *ring, } page_pool_put_full_page(ring->page_pool, - virt_to_head_page(xdp->data), true); + page_to_netmem(virt_to_head_page(xdp->data)), + true); update_stats: u64_stats_update_begin(&hw_stats->syncp); @@ -2074,8 +2078,9 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget, } dma_sync_single_for_cpu(eth->dma_dev, - page_pool_get_dma_addr(page) + MTK_PP_HEADROOM, - pktlen, page_pool_get_dma_dir(ring->page_pool)); + page_pool_get_dma_addr(page_to_netmem(page)) + + MTK_PP_HEADROOM, + pktlen, page_pool_get_dma_dir(ring->page_pool)); xdp_init_buff(&xdp, PAGE_SIZE, &ring->xdp_q); xdp_prepare_buff(&xdp, data, MTK_PP_HEADROOM, pktlen, @@ -2092,7 +2097,8 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget, skb = build_skb(data, PAGE_SIZE); if (unlikely(!skb)) { page_pool_put_full_page(ring->page_pool, - page, true); + page_to_netmem(page), + true); netdev->stats.rx_dropped++; goto skip_rx; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c index e2e7d82cfca4..c8275e4b6cae 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c @@ -122,7 +122,8 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq, * mode. */ - dma_addr = page_pool_get_dma_addr(page) + (xdpf->data - (void *)xdpf); + dma_addr = page_pool_get_dma_addr(page_to_netmem(page)) + + (xdpf->data - (void *)xdpf); dma_sync_single_for_device(sq->pdev, dma_addr, xdptxd->len, DMA_BIDIRECTIONAL); if (xdptxd->has_frags) { @@ -134,8 +135,8 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq, dma_addr_t addr; u32 len; - addr = page_pool_get_dma_addr(skb_frag_page(frag)) + - skb_frag_off(frag); + addr = page_pool_get_dma_addr(page_to_netmem(skb_frag_page(frag))) + + skb_frag_off(frag); len = skb_frag_size(frag); dma_sync_single_for_device(sq->pdev, addr, len, DMA_BIDIRECTIONAL); @@ -458,9 +459,12 @@ mlx5e_xmit_xdp_frame_mpwqe(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptx tmp.data = skb_frag_address(frag); tmp.len = skb_frag_size(frag); - tmp.dma_addr = xdptxdf->dma_arr ? xdptxdf->dma_arr[0] : - page_pool_get_dma_addr(skb_frag_page(frag)) + - skb_frag_off(frag); + tmp.dma_addr = + xdptxdf->dma_arr ? + xdptxdf->dma_arr[0] : + page_pool_get_dma_addr(page_to_netmem( + skb_frag_page(frag))) + + skb_frag_off(frag); p = &tmp; } } @@ -607,9 +611,11 @@ mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd, skb_frag_t *frag = &xdptxdf->sinfo->frags[i]; dma_addr_t addr; - addr = xdptxdf->dma_arr ? xdptxdf->dma_arr[i] : - page_pool_get_dma_addr(skb_frag_page(frag)) + - skb_frag_off(frag); + addr = xdptxdf->dma_arr ? + xdptxdf->dma_arr[i] : + page_pool_get_dma_addr(page_to_netmem( + skb_frag_page(frag))) + + skb_frag_off(frag); dseg->addr = cpu_to_be64(addr); dseg->byte_count = cpu_to_be32(skb_frag_size(frag)); @@ -699,7 +705,8 @@ static void mlx5e_free_xdpsq_desc(struct mlx5e_xdpsq *sq, /* No need to check ((page->pp_magic & ~0x3UL) == PP_SIGNATURE) * as we know this is a page_pool page. 
*/ - page_pool_recycle_direct(page->pp, page); + page_pool_recycle_direct(page->pp, + page_to_netmem(page)); } while (++n < num); break; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c index 8d9743a5e42c..73d41dc2b47e 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c @@ -278,11 +278,11 @@ static int mlx5e_page_alloc_fragmented(struct mlx5e_rq *rq, { struct page *page; - page = page_pool_dev_alloc_pages(rq->page_pool); + page = netmem_to_page(page_pool_dev_alloc_pages(rq->page_pool)); if (unlikely(!page)) return -ENOMEM; - page_pool_fragment_page(page, MLX5E_PAGECNT_BIAS_MAX); + page_pool_fragment_page(page_to_netmem(page), MLX5E_PAGECNT_BIAS_MAX); *frag_page = (struct mlx5e_frag_page) { .page = page, @@ -298,8 +298,9 @@ static void mlx5e_page_release_fragmented(struct mlx5e_rq *rq, u16 drain_count = MLX5E_PAGECNT_BIAS_MAX - frag_page->frags; struct page *page = frag_page->page; - if (page_pool_defrag_page(page, drain_count) == 0) - page_pool_put_defragged_page(rq->page_pool, page, -1, true); + if (page_pool_defrag_page(page_to_netmem(page), drain_count) == 0) + page_pool_put_defragged_page(rq->page_pool, + page_to_netmem(page), -1, true); } static inline int mlx5e_get_rx_frag(struct mlx5e_rq *rq, @@ -358,7 +359,7 @@ static int mlx5e_alloc_rx_wqe(struct mlx5e_rq *rq, struct mlx5e_rx_wqe_cyc *wqe, frag->flags &= ~BIT(MLX5E_WQE_FRAG_SKIP_RELEASE); headroom = i == 0 ? rq->buff.headroom : 0; - addr = page_pool_get_dma_addr(frag->frag_page->page); + addr = page_pool_get_dma_addr(page_to_netmem(frag->frag_page->page)); wqe->data[i].addr = cpu_to_be64(addr + frag->offset + headroom); } @@ -501,7 +502,8 @@ mlx5e_add_skb_shared_info_frag(struct mlx5e_rq *rq, struct skb_shared_info *sinf { skb_frag_t *frag; - dma_addr_t addr = page_pool_get_dma_addr(frag_page->page); + dma_addr_t addr = + page_pool_get_dma_addr(page_to_netmem(frag_page->page)); dma_sync_single_for_cpu(rq->pdev, addr + frag_offset, len, rq->buff.map_dir); if (!xdp_buff_has_frags(xdp)) { @@ -526,7 +528,7 @@ mlx5e_add_skb_frag(struct mlx5e_rq *rq, struct sk_buff *skb, struct page *page, u32 frag_offset, u32 len, unsigned int truesize) { - dma_addr_t addr = page_pool_get_dma_addr(page); + dma_addr_t addr = page_pool_get_dma_addr(page_to_netmem(page)); dma_sync_single_for_cpu(rq->pdev, addr + frag_offset, len, rq->buff.map_dir); @@ -674,7 +676,7 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq, if (unlikely(err)) goto err_unmap; - addr = page_pool_get_dma_addr(frag_page->page); + addr = page_pool_get_dma_addr(page_to_netmem(frag_page->page)); dma_info->addr = addr; dma_info->frag_page = frag_page; @@ -786,7 +788,7 @@ static int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix) err = mlx5e_page_alloc_fragmented(rq, frag_page); if (unlikely(err)) goto err_unmap; - addr = page_pool_get_dma_addr(frag_page->page); + addr = page_pool_get_dma_addr(page_to_netmem(frag_page->page)); umr_wqe->inline_mtts[i] = (struct mlx5_mtt) { .ptag = cpu_to_be64(addr | MLX5_EN_WR), }; @@ -1685,7 +1687,7 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi, data = va + rx_headroom; frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt); - addr = page_pool_get_dma_addr(frag_page->page); + addr = page_pool_get_dma_addr(page_to_netmem(frag_page->page)); dma_sync_single_range_for_cpu(rq->pdev, addr, wi->offset, frag_size, rq->buff.map_dir); net_prefetch(data); @@ -1738,7 +1740,7 @@ 
mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi va = page_address(frag_page->page) + wi->offset; frag_consumed_bytes = min_t(u32, frag_info->frag_size, cqe_bcnt); - addr = page_pool_get_dma_addr(frag_page->page); + addr = page_pool_get_dma_addr(page_to_netmem(frag_page->page)); dma_sync_single_range_for_cpu(rq->pdev, addr, wi->offset, rq->buff.frame0_sz, rq->buff.map_dir); net_prefetchw(va); /* xdp_frame data area */ @@ -2124,7 +2126,7 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w while (++pagep < frag_page); } /* copy header */ - addr = page_pool_get_dma_addr(head_page->page); + addr = page_pool_get_dma_addr(page_to_netmem(head_page->page)); mlx5e_copy_skb_header(rq, skb, head_page->page, addr, head_offset, head_offset, headlen); /* skb linear part was allocated with headlen and aligned to long */ @@ -2159,7 +2161,7 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, data = va + rx_headroom; frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt); - addr = page_pool_get_dma_addr(frag_page->page); + addr = page_pool_get_dma_addr(page_to_netmem(frag_page->page)); dma_sync_single_range_for_cpu(rq->pdev, addr, head_offset, frag_size, rq->buff.map_dir); net_prefetch(data); diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_fdma.c b/drivers/net/ethernet/microchip/lan966x/lan966x_fdma.c index 3960534ac2ad..fdd4a9ccafd4 100644 --- a/drivers/net/ethernet/microchip/lan966x/lan966x_fdma.c +++ b/drivers/net/ethernet/microchip/lan966x/lan966x_fdma.c @@ -16,11 +16,12 @@ static struct page *lan966x_fdma_rx_alloc_page(struct lan966x_rx *rx, { struct page *page; - page = page_pool_dev_alloc_pages(rx->page_pool); + page = netmem_to_page(page_pool_dev_alloc_pages(rx->page_pool)); if (unlikely(!page)) return NULL; - db->dataptr = page_pool_get_dma_addr(page) + XDP_PACKET_HEADROOM; + db->dataptr = page_pool_get_dma_addr(page_to_netmem(page)) + + XDP_PACKET_HEADROOM; return page; } @@ -32,7 +33,8 @@ static void lan966x_fdma_rx_free_pages(struct lan966x_rx *rx) for (i = 0; i < FDMA_DCB_MAX; ++i) { for (j = 0; j < FDMA_RX_DCB_MAX_DBS; ++j) page_pool_put_full_page(rx->page_pool, - rx->page[i][j], false); + page_to_netmem(rx->page[i][j]), + false); } } @@ -44,7 +46,7 @@ static void lan966x_fdma_rx_free_page(struct lan966x_rx *rx) if (unlikely(!page)) return; - page_pool_recycle_direct(rx->page_pool, page); + page_pool_recycle_direct(rx->page_pool, page_to_netmem(page)); } static void lan966x_fdma_rx_add_dcb(struct lan966x_rx *rx, @@ -435,7 +437,7 @@ static void lan966x_fdma_tx_clear_buf(struct lan966x *lan966x, int weight) xdp_return_frame_bulk(dcb_buf->data.xdpf, &bq); else page_pool_recycle_direct(rx->page_pool, - dcb_buf->data.page); + page_to_netmem(dcb_buf->data.page)); } clear = true; @@ -537,7 +539,7 @@ static struct sk_buff *lan966x_fdma_rx_get_frame(struct lan966x_rx *rx, return skb; free_page: - page_pool_recycle_direct(rx->page_pool, page); + page_pool_recycle_direct(rx->page_pool, page_to_netmem(page)); return NULL; } @@ -765,7 +767,7 @@ int lan966x_fdma_xmit_xdpf(struct lan966x_port *port, void *ptr, u32 len) lan966x_ifh_set_bypass(ifh, 1); lan966x_ifh_set_port(ifh, BIT_ULL(port->chip_port)); - dma_addr = page_pool_get_dma_addr(page); + dma_addr = page_pool_get_dma_addr(page_to_netmem(page)); dma_sync_single_for_device(lan966x->dev, dma_addr + XDP_PACKET_HEADROOM, len + IFH_LEN_BYTES, diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c index 
cb7b9d8ef618..7172041076d8 100644 --- a/drivers/net/ethernet/microsoft/mana/mana_en.c +++ b/drivers/net/ethernet/microsoft/mana/mana_en.c @@ -1587,7 +1587,7 @@ static void mana_rx_skb(void *buf_va, bool from_pool, drop: if (from_pool) { page_pool_recycle_direct(rxq->page_pool, - virt_to_head_page(buf_va)); + page_to_netmem(virt_to_head_page(buf_va))); } else { WARN_ON_ONCE(rxq->xdp_save_va); /* Save for reuse */ @@ -1627,7 +1627,7 @@ static void *mana_get_rxfrag(struct mana_rxq *rxq, struct device *dev, return NULL; } } else { - page = page_pool_dev_alloc_pages(rxq->page_pool); + page = netmem_to_page(page_pool_dev_alloc_pages(rxq->page_pool)); if (!page) return NULL; @@ -1639,7 +1639,8 @@ static void *mana_get_rxfrag(struct mana_rxq *rxq, struct device *dev, DMA_FROM_DEVICE); if (dma_mapping_error(dev, *da)) { if (*from_pool) - page_pool_put_full_page(rxq->page_pool, page, false); + page_pool_put_full_page(rxq->page_pool, + page_to_netmem(page), false); else put_page(virt_to_head_page(va)); @@ -2027,7 +2028,8 @@ static void mana_destroy_rxq(struct mana_port_context *apc, page = virt_to_head_page(rx_oob->buf_va); if (rx_oob->from_pool) - page_pool_put_full_page(rxq->page_pool, page, false); + page_pool_put_full_page(rxq->page_pool, + page_to_netmem(page), false); else put_page(page); diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c index 5ab8b81b84e6..a573d1dead67 100644 --- a/drivers/net/ethernet/socionext/netsec.c +++ b/drivers/net/ethernet/socionext/netsec.c @@ -739,7 +739,7 @@ static void *netsec_alloc_rx_data(struct netsec_priv *priv, struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX]; struct page *page; - page = page_pool_dev_alloc_pages(dring->page_pool); + page = netmem_to_page(page_pool_dev_alloc_pages(dring->page_pool)); if (!page) return NULL; @@ -747,7 +747,8 @@ static void *netsec_alloc_rx_data(struct netsec_priv *priv, * page_pool API will map the whole page, skip what's needed for * network payloads and/or XDP */ - *dma_handle = page_pool_get_dma_addr(page) + NETSEC_RXBUF_HEADROOM; + *dma_handle = page_pool_get_dma_addr(page_to_netmem(page)) + + NETSEC_RXBUF_HEADROOM; /* Make sure the incoming payload fits in the page for XDP and non-XDP * cases and reserve enough space for headroom + skb_shared_info */ @@ -862,8 +863,8 @@ static u32 netsec_xdp_queue_one(struct netsec_priv *priv, enum dma_data_direction dma_dir = page_pool_get_dma_dir(rx_ring->page_pool); - dma_handle = page_pool_get_dma_addr(page) + xdpf->headroom + - sizeof(*xdpf); + dma_handle = page_pool_get_dma_addr(page_to_netmem(page)) + + xdpf->headroom + sizeof(*xdpf); dma_sync_single_for_device(priv->dev, dma_handle, xdpf->len, dma_dir); tx_desc.buf_type = TYPE_NETSEC_XDP_TX; @@ -919,7 +920,8 @@ static u32 netsec_run_xdp(struct netsec_priv *priv, struct bpf_prog *prog, ret = netsec_xdp_xmit_back(priv, xdp); if (ret != NETSEC_XDP_TX) { page = virt_to_head_page(xdp->data); - page_pool_put_page(dring->page_pool, page, sync, true); + page_pool_put_page(dring->page_pool, + page_to_netmem(page), sync, true); } break; case XDP_REDIRECT: @@ -929,7 +931,8 @@ static u32 netsec_run_xdp(struct netsec_priv *priv, struct bpf_prog *prog, } else { ret = NETSEC_XDP_CONSUMED; page = virt_to_head_page(xdp->data); - page_pool_put_page(dring->page_pool, page, sync, true); + page_pool_put_page(dring->page_pool, + page_to_netmem(page), sync, true); } break; default: @@ -941,7 +944,8 @@ static u32 netsec_run_xdp(struct netsec_priv *priv, struct bpf_prog *prog, case XDP_DROP: ret 
= NETSEC_XDP_CONSUMED; page = virt_to_head_page(xdp->data); - page_pool_put_page(dring->page_pool, page, sync, true); + page_pool_put_page(dring->page_pool, page_to_netmem(page), sync, + true); break; } @@ -1038,8 +1042,8 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget) * cache state. Since we paid the allocation cost if * building an skb fails try to put the page into cache */ - page_pool_put_page(dring->page_pool, page, pkt_len, - true); + page_pool_put_page(dring->page_pool, + page_to_netmem(page), pkt_len, true); netif_err(priv, drv, priv->ndev, "rx failed to build skb\n"); break; @@ -1212,7 +1216,8 @@ static void netsec_uninit_pkt_dring(struct netsec_priv *priv, int id) if (id == NETSEC_RING_RX) { struct page *page = virt_to_page(desc->addr); - page_pool_put_full_page(dring->page_pool, page, false); + page_pool_put_full_page(dring->page_pool, + page_to_netmem(page), false); } else if (id == NETSEC_RING_TX) { dma_unmap_single(priv->dev, desc->dma_addr, desc->len, DMA_TO_DEVICE); diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c index 47de466e432c..7680db4b54b6 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c @@ -1455,25 +1455,29 @@ static int stmmac_init_rx_buffers(struct stmmac_priv *priv, gfp |= GFP_DMA32; if (!buf->page) { - buf->page = page_pool_alloc_pages(rx_q->page_pool, gfp); + buf->page = netmem_to_page(page_pool_alloc_pages(rx_q->page_pool, + gfp)); if (!buf->page) return -ENOMEM; buf->page_offset = stmmac_rx_offset(priv); } if (priv->sph && !buf->sec_page) { - buf->sec_page = page_pool_alloc_pages(rx_q->page_pool, gfp); + buf->sec_page = netmem_to_page(page_pool_alloc_pages(rx_q->page_pool, + gfp)); if (!buf->sec_page) return -ENOMEM; - buf->sec_addr = page_pool_get_dma_addr(buf->sec_page); + buf->sec_addr = + page_pool_get_dma_addr(page_to_netmem(buf->sec_page)); stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, true); } else { buf->sec_page = NULL; stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, false); } - buf->addr = page_pool_get_dma_addr(buf->page) + buf->page_offset; + buf->addr = page_pool_get_dma_addr(page_to_netmem(buf->page)) + + buf->page_offset; stmmac_set_desc_addr(priv, p, buf->addr); if (dma_conf->dma_buf_sz == BUF_SIZE_16KiB) @@ -1495,11 +1499,13 @@ static void stmmac_free_rx_buffer(struct stmmac_priv *priv, struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i]; if (buf->page) - page_pool_put_full_page(rx_q->page_pool, buf->page, false); + page_pool_put_full_page(rx_q->page_pool, + page_to_netmem(buf->page), false); buf->page = NULL; if (buf->sec_page) - page_pool_put_full_page(rx_q->page_pool, buf->sec_page, false); + page_pool_put_full_page(rx_q->page_pool, + page_to_netmem(buf->sec_page), false); buf->sec_page = NULL; } @@ -4739,20 +4745,23 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue) p = rx_q->dma_rx + entry; if (!buf->page) { - buf->page = page_pool_alloc_pages(rx_q->page_pool, gfp); + buf->page = netmem_to_page(page_pool_alloc_pages(rx_q->page_pool, + gfp)); if (!buf->page) break; } if (priv->sph && !buf->sec_page) { - buf->sec_page = page_pool_alloc_pages(rx_q->page_pool, gfp); + buf->sec_page = netmem_to_page(page_pool_alloc_pages(rx_q->page_pool, + gfp)); if (!buf->sec_page) break; - buf->sec_addr = page_pool_get_dma_addr(buf->sec_page); + buf->sec_addr = page_pool_get_dma_addr(page_to_netmem(buf->sec_page)); } - buf->addr = page_pool_get_dma_addr(buf->page) + 
buf->page_offset; + buf->addr = page_pool_get_dma_addr(page_to_netmem(buf->page)) + + buf->page_offset; stmmac_set_desc_addr(priv, p, buf->addr); if (priv->sph) @@ -4861,8 +4870,8 @@ static int stmmac_xdp_xmit_xdpf(struct stmmac_priv *priv, int queue, } else { struct page *page = virt_to_page(xdpf->data); - dma_addr = page_pool_get_dma_addr(page) + sizeof(*xdpf) + - xdpf->headroom; + dma_addr = page_pool_get_dma_addr(page_to_netmem(page)) + + sizeof(*xdpf) + xdpf->headroom; dma_sync_single_for_device(priv->device, dma_addr, xdpf->len, DMA_BIDIRECTIONAL); @@ -5432,7 +5441,8 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue) if (priv->extend_desc) stmmac_rx_extended_status(priv, &priv->xstats, rx_q->dma_erx + entry); if (unlikely(status == discard_frame)) { - page_pool_recycle_direct(rx_q->page_pool, buf->page); + page_pool_recycle_direct(rx_q->page_pool, + page_to_netmem(buf->page)); buf->page = NULL; error = 1; if (!priv->hwts_rx_en) @@ -5500,9 +5510,12 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue) unsigned int xdp_res = -PTR_ERR(skb); if (xdp_res & STMMAC_XDP_CONSUMED) { - page_pool_put_page(rx_q->page_pool, - virt_to_head_page(ctx.xdp.data), - sync_len, true); + page_pool_put_page( + rx_q->page_pool, + page_to_netmem( + virt_to_head_page( + ctx.xdp.data)), + sync_len, true); buf->page = NULL; rx_dropped++; @@ -5543,7 +5556,8 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue) skb_put(skb, buf1_len); /* Data payload copied into SKB, page ready for recycle */ - page_pool_recycle_direct(rx_q->page_pool, buf->page); + page_pool_recycle_direct(rx_q->page_pool, + page_to_netmem(buf->page)); buf->page = NULL; } else if (buf1_len) { dma_sync_single_for_cpu(priv->device, buf->addr, diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c index ea85c6dd5484..ea9f1fe492e6 100644 --- a/drivers/net/ethernet/ti/cpsw.c +++ b/drivers/net/ethernet/ti/cpsw.c @@ -380,11 +380,11 @@ static void cpsw_rx_handler(void *token, int len, int status) } /* the interface is going down, pages are purged */ - page_pool_recycle_direct(pool, page); + page_pool_recycle_direct(pool, page_to_netmem(page)); return; } - new_page = page_pool_dev_alloc_pages(pool); + new_page = netmem_to_page(page_pool_dev_alloc_pages(pool)); if (unlikely(!new_page)) { new_page = page; ndev->stats.rx_dropped++; @@ -417,7 +417,7 @@ static void cpsw_rx_handler(void *token, int len, int status) skb = build_skb(pa, cpsw_rxbuf_total_len(pkt_size)); if (!skb) { ndev->stats.rx_dropped++; - page_pool_recycle_direct(pool, page); + page_pool_recycle_direct(pool, page_to_netmem(page)); goto requeue; } @@ -442,12 +442,13 @@ static void cpsw_rx_handler(void *token, int len, int status) xmeta->ndev = ndev; xmeta->ch = ch; - dma = page_pool_get_dma_addr(new_page) + CPSW_HEADROOM_NA; + dma = page_pool_get_dma_addr(page_to_netmem(new_page)) + + CPSW_HEADROOM_NA; ret = cpdma_chan_submit_mapped(cpsw->rxv[ch].ch, new_page, dma, pkt_size, 0); if (ret < 0) { WARN_ON(ret == -ENOMEM); - page_pool_recycle_direct(pool, new_page); + page_pool_recycle_direct(pool, page_to_netmem(new_page)); } } diff --git a/drivers/net/ethernet/ti/cpsw_new.c b/drivers/net/ethernet/ti/cpsw_new.c index 498c50c6d1a7..d02b29aedddf 100644 --- a/drivers/net/ethernet/ti/cpsw_new.c +++ b/drivers/net/ethernet/ti/cpsw_new.c @@ -325,11 +325,11 @@ static void cpsw_rx_handler(void *token, int len, int status) } /* the interface is going down, pages are purged */ - page_pool_recycle_direct(pool, page); + 
page_pool_recycle_direct(pool, page_to_netmem(page)); return; } - new_page = page_pool_dev_alloc_pages(pool); + new_page = netmem_to_page(page_pool_dev_alloc_pages(pool)); if (unlikely(!new_page)) { new_page = page; ndev->stats.rx_dropped++; @@ -361,7 +361,7 @@ static void cpsw_rx_handler(void *token, int len, int status) skb = build_skb(pa, cpsw_rxbuf_total_len(pkt_size)); if (!skb) { ndev->stats.rx_dropped++; - page_pool_recycle_direct(pool, page); + page_pool_recycle_direct(pool, page_to_netmem(page)); goto requeue; } @@ -387,12 +387,13 @@ static void cpsw_rx_handler(void *token, int len, int status) xmeta->ndev = ndev; xmeta->ch = ch; - dma = page_pool_get_dma_addr(new_page) + CPSW_HEADROOM_NA; + dma = page_pool_get_dma_addr(page_to_netmem(new_page)) + + CPSW_HEADROOM_NA; ret = cpdma_chan_submit_mapped(cpsw->rxv[ch].ch, new_page, dma, pkt_size, 0); if (ret < 0) { WARN_ON(ret == -ENOMEM); - page_pool_recycle_direct(pool, new_page); + page_pool_recycle_direct(pool, page_to_netmem(new_page)); } } diff --git a/drivers/net/ethernet/ti/cpsw_priv.c b/drivers/net/ethernet/ti/cpsw_priv.c index 764ed298b570..222b2bd3dc47 100644 --- a/drivers/net/ethernet/ti/cpsw_priv.c +++ b/drivers/net/ethernet/ti/cpsw_priv.c @@ -1113,7 +1113,7 @@ int cpsw_fill_rx_channels(struct cpsw_priv *priv) pool = cpsw->page_pool[ch]; ch_buf_num = cpdma_chan_get_rx_buf_num(cpsw->rxv[ch].ch); for (i = 0; i < ch_buf_num; i++) { - page = page_pool_dev_alloc_pages(pool); + page = netmem_to_page(page_pool_dev_alloc_pages(pool)); if (!page) { cpsw_err(priv, ifup, "allocate rx page err\n"); return -ENOMEM; @@ -1123,7 +1123,8 @@ int cpsw_fill_rx_channels(struct cpsw_priv *priv) xmeta->ndev = priv->ndev; xmeta->ch = ch; - dma = page_pool_get_dma_addr(page) + CPSW_HEADROOM_NA; + dma = page_pool_get_dma_addr(page_to_netmem(page)) + + CPSW_HEADROOM_NA; ret = cpdma_chan_idle_submit_mapped(cpsw->rxv[ch].ch, page, dma, cpsw->rx_packet_max, @@ -1132,7 +1133,8 @@ int cpsw_fill_rx_channels(struct cpsw_priv *priv) cpsw_err(priv, ifup, "cannot submit page to channel %d rx, error %d\n", ch, ret); - page_pool_recycle_direct(pool, page); + page_pool_recycle_direct(pool, + page_to_netmem(page)); return ret; } } @@ -1303,7 +1305,7 @@ int cpsw_xdp_tx_frame(struct cpsw_priv *priv, struct xdp_frame *xdpf, txch = cpsw->txv[0].ch; if (page) { - dma = page_pool_get_dma_addr(page); + dma = page_pool_get_dma_addr(page_to_netmem(page)); dma += xdpf->headroom + sizeof(struct xdp_frame); ret = cpdma_chan_submit_mapped(txch, cpsw_xdpf_to_handle(xdpf), dma, xdpf->len, port); @@ -1379,7 +1381,7 @@ int cpsw_run_xdp(struct cpsw_priv *priv, int ch, struct xdp_buff *xdp, out: return ret; drop: - page_pool_recycle_direct(cpsw->page_pool[ch], page); + page_pool_recycle_direct(cpsw->page_pool[ch], page_to_netmem(page)); return ret; } diff --git a/drivers/net/ethernet/wangxun/libwx/wx_lib.c b/drivers/net/ethernet/wangxun/libwx/wx_lib.c index a5a50b5a8816..57291cbf774b 100644 --- a/drivers/net/ethernet/wangxun/libwx/wx_lib.c +++ b/drivers/net/ethernet/wangxun/libwx/wx_lib.c @@ -228,7 +228,8 @@ static void wx_dma_sync_frag(struct wx_ring *rx_ring, /* If the page was released, just unmap it. 
*/ if (unlikely(WX_CB(skb)->page_released)) - page_pool_put_full_page(rx_ring->page_pool, rx_buffer->page, false); + page_pool_put_full_page(rx_ring->page_pool, + page_to_netmem(rx_buffer->page), false); } static struct wx_rx_buffer *wx_get_rx_buffer(struct wx_ring *rx_ring, @@ -288,7 +289,9 @@ static void wx_put_rx_buffer(struct wx_ring *rx_ring, /* the page has been released from the ring */ WX_CB(skb)->page_released = true; else - page_pool_put_full_page(rx_ring->page_pool, rx_buffer->page, false); + page_pool_put_full_page(rx_ring->page_pool, + page_to_netmem(rx_buffer->page), + false); __page_frag_cache_drain(rx_buffer->page, rx_buffer->pagecnt_bias); @@ -375,9 +378,9 @@ static bool wx_alloc_mapped_page(struct wx_ring *rx_ring, if (likely(page)) return true; - page = page_pool_dev_alloc_pages(rx_ring->page_pool); + page = netmem_to_page(page_pool_dev_alloc_pages(rx_ring->page_pool)); WARN_ON(!page); - dma = page_pool_get_dma_addr(page); + dma = page_pool_get_dma_addr(page_to_netmem(page)); bi->page_dma = dma; bi->page = page; @@ -2232,7 +2235,9 @@ static void wx_clean_rx_ring(struct wx_ring *rx_ring) struct sk_buff *skb = rx_buffer->skb; if (WX_CB(skb)->page_released) - page_pool_put_full_page(rx_ring->page_pool, rx_buffer->page, false); + page_pool_put_full_page(rx_ring->page_pool, + page_to_netmem(rx_buffer->page), + false); dev_kfree_skb(skb); } @@ -2247,7 +2252,8 @@ static void wx_clean_rx_ring(struct wx_ring *rx_ring) DMA_FROM_DEVICE); /* free resources associated with mapping */ - page_pool_put_full_page(rx_ring->page_pool, rx_buffer->page, false); + page_pool_put_full_page(rx_ring->page_pool, + page_to_netmem(rx_buffer->page), false); __page_frag_cache_drain(rx_buffer->page, rx_buffer->pagecnt_bias); diff --git a/drivers/net/veth.c b/drivers/net/veth.c index 977861c46b1f..c93c199224da 100644 --- a/drivers/net/veth.c +++ b/drivers/net/veth.c @@ -781,8 +781,9 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq, size = min_t(u32, len, PAGE_SIZE); truesize = size; - page = page_pool_dev_alloc(rq->page_pool, &page_offset, - &truesize); + page = netmem_to_page(page_pool_dev_alloc(rq->page_pool, + &page_offset, + &truesize)); if (!page) { consume_skb(nskb); goto drop; diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c index 0578864792b6..063a5c2c948d 100644 --- a/drivers/net/vmxnet3/vmxnet3_drv.c +++ b/drivers/net/vmxnet3/vmxnet3_drv.c @@ -1349,11 +1349,12 @@ vmxnet3_pp_get_buff(struct page_pool *pp, dma_addr_t *dma_addr, { struct page *page; - page = page_pool_alloc_pages(pp, gfp_mask | __GFP_NOWARN); + page = netmem_to_page(page_pool_alloc_pages(pp, + gfp_mask | __GFP_NOWARN)); if (unlikely(!page)) return NULL; - *dma_addr = page_pool_get_dma_addr(page) + pp->p.offset; + *dma_addr = page_pool_get_dma_addr(page_to_netmem(page)) + pp->p.offset; return page_address(page); } @@ -1931,7 +1932,7 @@ vmxnet3_rq_cleanup(struct vmxnet3_rx_queue *rq, if (rxd->btype == VMXNET3_RXD_BTYPE_HEAD && rbi->page && rbi->buf_type == VMXNET3_RX_BUF_XDP) { page_pool_recycle_direct(rq->page_pool, - rbi->page); + page_to_netmem(rbi->page)); rbi->page = NULL; } else if (rxd->btype == VMXNET3_RXD_BTYPE_HEAD && rbi->skb) { diff --git a/drivers/net/vmxnet3/vmxnet3_xdp.c b/drivers/net/vmxnet3/vmxnet3_xdp.c index 80ddaff759d4..71f3c278a960 100644 --- a/drivers/net/vmxnet3/vmxnet3_xdp.c +++ b/drivers/net/vmxnet3/vmxnet3_xdp.c @@ -147,7 +147,7 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter, tbi->map_type |= VMXNET3_MAP_SINGLE; } else { /* XDP buffer from page 
pool */ page = virt_to_page(xdpf->data); - tbi->dma_addr = page_pool_get_dma_addr(page) + + tbi->dma_addr = page_pool_get_dma_addr(page_to_netmem(page)) + VMXNET3_XDP_HEADROOM; dma_sync_single_for_device(&adapter->pdev->dev, tbi->dma_addr, buf_size, @@ -269,7 +269,8 @@ vmxnet3_run_xdp(struct vmxnet3_rx_queue *rq, struct xdp_buff *xdp, rq->stats.xdp_redirects++; } else { rq->stats.xdp_drops++; - page_pool_recycle_direct(rq->page_pool, page); + page_pool_recycle_direct(rq->page_pool, + page_to_netmem(page)); } return act; case XDP_TX: @@ -277,7 +278,8 @@ vmxnet3_run_xdp(struct vmxnet3_rx_queue *rq, struct xdp_buff *xdp, if (unlikely(!xdpf || vmxnet3_xdp_xmit_back(rq->adapter, xdpf))) { rq->stats.xdp_drops++; - page_pool_recycle_direct(rq->page_pool, page); + page_pool_recycle_direct(rq->page_pool, + page_to_netmem(page)); } else { rq->stats.xdp_tx++; } @@ -294,7 +296,7 @@ vmxnet3_run_xdp(struct vmxnet3_rx_queue *rq, struct xdp_buff *xdp, break; } - page_pool_recycle_direct(rq->page_pool, page); + page_pool_recycle_direct(rq->page_pool, page_to_netmem(page)); return act; } @@ -307,7 +309,7 @@ vmxnet3_build_skb(struct vmxnet3_rx_queue *rq, struct page *page, skb = build_skb(page_address(page), PAGE_SIZE); if (unlikely(!skb)) { - page_pool_recycle_direct(rq->page_pool, page); + page_pool_recycle_direct(rq->page_pool, page_to_netmem(page)); rq->stats.rx_buf_alloc_failure++; return NULL; } @@ -332,7 +334,7 @@ vmxnet3_process_xdp_small(struct vmxnet3_adapter *adapter, struct page *page; int act; - page = page_pool_alloc_pages(rq->page_pool, GFP_ATOMIC); + page = netmem_to_page(page_pool_alloc_pages(rq->page_pool, GFP_ATOMIC)); if (unlikely(!page)) { rq->stats.rx_buf_alloc_failure++; return XDP_DROP; @@ -381,9 +383,9 @@ vmxnet3_process_xdp(struct vmxnet3_adapter *adapter, page = rbi->page; dma_sync_single_for_cpu(&adapter->pdev->dev, - page_pool_get_dma_addr(page) + - rq->page_pool->p.offset, rcd->len, - page_pool_get_dma_dir(rq->page_pool)); + page_pool_get_dma_addr(page_to_netmem(page)) + + rq->page_pool->p.offset, + rcd->len, page_pool_get_dma_dir(rq->page_pool)); xdp_init_buff(&xdp, rbi->len, &rq->xdp_rxq); xdp_prepare_buff(&xdp, page_address(page), rq->page_pool->p.offset, diff --git a/drivers/net/wireless/mediatek/mt76/dma.c b/drivers/net/wireless/mediatek/mt76/dma.c index 511fe7e6e744..64972792fa4b 100644 --- a/drivers/net/wireless/mediatek/mt76/dma.c +++ b/drivers/net/wireless/mediatek/mt76/dma.c @@ -616,7 +616,9 @@ mt76_dma_rx_fill(struct mt76_dev *dev, struct mt76_queue *q, if (!buf) break; - addr = page_pool_get_dma_addr(virt_to_head_page(buf)) + offset; + addr = page_pool_get_dma_addr( + page_to_netmem(virt_to_head_page(buf))) + + offset; dir = page_pool_get_dma_dir(q->page_pool); dma_sync_single_for_device(dev->dma_dev, addr, len, dir); diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h index ea828ba0b83a..a559d870312a 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76.h +++ b/drivers/net/wireless/mediatek/mt76/mt76.h @@ -1565,7 +1565,7 @@ static inline void mt76_put_page_pool_buf(void *buf, bool allow_direct) { struct page *page = virt_to_head_page(buf); - page_pool_put_full_page(page->pp, page, allow_direct); + page_pool_put_full_page(page->pp, page_to_netmem(page), allow_direct); } static inline void * @@ -1573,7 +1573,8 @@ mt76_get_page_pool_buf(struct mt76_queue *q, u32 *offset, u32 size) { struct page *page; - page = page_pool_dev_alloc_frag(q->page_pool, offset, size); + page = netmem_to_page( + 
page_pool_dev_alloc_frag(q->page_pool, offset, size)); if (!page) return NULL; diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c b/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c index e7d8e03f826f..452d3018adc7 100644 --- a/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c +++ b/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c @@ -616,7 +616,9 @@ static u32 mt7915_mmio_wed_init_rx_buf(struct mtk_wed_device *wed, int size) if (!buf) goto unmap; - addr = page_pool_get_dma_addr(virt_to_head_page(buf)) + offset; + addr = page_pool_get_dma_addr( + page_to_netmem(virt_to_head_page(buf))) + + offset; dir = page_pool_get_dma_dir(q->page_pool); dma_sync_single_for_device(dev->mt76.dma_dev, addr, len, dir); diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c index ad29f370034e..2b07b56fde54 100644 --- a/drivers/net/xen-netfront.c +++ b/drivers/net/xen-netfront.c @@ -278,8 +278,8 @@ static struct sk_buff *xennet_alloc_one_rx_buffer(struct netfront_queue *queue) if (unlikely(!skb)) return NULL; - page = page_pool_alloc_pages(queue->page_pool, - GFP_ATOMIC | __GFP_NOWARN | __GFP_ZERO); + page = netmem_to_page(page_pool_alloc_pages( + queue->page_pool, GFP_ATOMIC | __GFP_NOWARN | __GFP_ZERO)); if (unlikely(!page)) { kfree_skb(skb); return NULL; diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h index 7dc65774cde5..153a3313562c 100644 --- a/include/net/page_pool/helpers.h +++ b/include/net/page_pool/helpers.h @@ -85,7 +85,7 @@ static inline u64 *page_pool_ethtool_stats_get(u64 *data, void *stats) * * Get a page from the page allocator or page_pool caches. */ -static inline struct page *page_pool_dev_alloc_pages(struct page_pool *pool) +static inline struct netmem *page_pool_dev_alloc_pages(struct page_pool *pool) { gfp_t gfp = (GFP_ATOMIC | __GFP_NOWARN); @@ -103,18 +103,18 @@ static inline struct page *page_pool_dev_alloc_pages(struct page_pool *pool) * Return: * Return allocated page fragment, otherwise return NULL. */ -static inline struct page *page_pool_dev_alloc_frag(struct page_pool *pool, - unsigned int *offset, - unsigned int size) +static inline struct netmem *page_pool_dev_alloc_frag(struct page_pool *pool, + unsigned int *offset, + unsigned int size) { gfp_t gfp = (GFP_ATOMIC | __GFP_NOWARN); return page_pool_alloc_frag(pool, offset, size, gfp); } -static inline struct page *page_pool_alloc(struct page_pool *pool, - unsigned int *offset, - unsigned int *size, gfp_t gfp) +static inline struct netmem *page_pool_alloc(struct page_pool *pool, + unsigned int *offset, + unsigned int *size, gfp_t gfp) { unsigned int max_size = PAGE_SIZE << pool->p.order; struct page *page; @@ -125,7 +125,7 @@ static inline struct page *page_pool_alloc(struct page_pool *pool, return page_pool_alloc_pages(pool, gfp); } - page = page_pool_alloc_frag(pool, offset, *size, gfp); + page = netmem_to_page(page_pool_alloc_frag(pool, offset, *size, gfp)); if (unlikely(!page)) return NULL; @@ -138,7 +138,7 @@ static inline struct page *page_pool_alloc(struct page_pool *pool, pool->frag_offset = max_size; } - return page; + return page_to_netmem(page); } /** @@ -154,9 +154,9 @@ static inline struct page *page_pool_alloc(struct page_pool *pool, * Return: * Return allocated page or page fragment, otherwise return NULL. 
*/ -static inline struct page *page_pool_dev_alloc(struct page_pool *pool, - unsigned int *offset, - unsigned int *size) +static inline struct netmem *page_pool_dev_alloc(struct page_pool *pool, + unsigned int *offset, + unsigned int *size) { gfp_t gfp = (GFP_ATOMIC | __GFP_NOWARN); @@ -170,7 +170,8 @@ static inline void *page_pool_alloc_va(struct page_pool *pool, struct page *page; /* Mask off __GFP_HIGHMEM to ensure we can use page_address() */ - page = page_pool_alloc(pool, &offset, size, gfp & ~__GFP_HIGHMEM); + page = netmem_to_page( + page_pool_alloc(pool, &offset, size, gfp & ~__GFP_HIGHMEM)); if (unlikely(!page)) return NULL; @@ -220,13 +221,14 @@ inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool) * refcnt is 1 or return it back to the memory allocator and destroy any * mappings we have. */ -static inline void page_pool_fragment_page(struct page *page, long nr) +static inline void page_pool_fragment_page(struct netmem *netmem, long nr) { - atomic_long_set(&page->pp_frag_count, nr); + atomic_long_set(&netmem_to_page(netmem)->pp_frag_count, nr); } -static inline long page_pool_defrag_page(struct page *page, long nr) +static inline long page_pool_defrag_page(struct netmem *netmem, long nr) { + struct page *page = netmem_to_page(netmem); long ret; /* If nr == pp_frag_count then we have cleared all remaining @@ -269,16 +271,16 @@ static inline long page_pool_defrag_page(struct page *page, long nr) return ret; } -static inline bool page_pool_is_last_frag(struct page *page) +static inline bool page_pool_is_last_frag(struct netmem *netmem) { /* If page_pool_defrag_page() returns 0, we were the last user */ - return page_pool_defrag_page(page, 1) == 0; + return page_pool_defrag_page(netmem, 1) == 0; } /** * page_pool_put_page() - release a reference to a page pool page * @pool: pool from which page was allocated - * @page: page to release a reference on + * @netmem: netmem to release a reference on * @dma_sync_size: how much of the page may have been touched by the device * @allow_direct: released by the consumer, allow lockless caching * @@ -288,8 +290,7 @@ static inline bool page_pool_is_last_frag(struct page *page) * caches. If PP_FLAG_DMA_SYNC_DEV is set, the page will be synced for_device * using dma_sync_single_range_for_device(). */ -static inline void page_pool_put_page(struct page_pool *pool, - struct page *page, +static inline void page_pool_put_page(struct page_pool *pool, struct netmem *netmem, unsigned int dma_sync_size, bool allow_direct) { @@ -297,40 +298,40 @@ static inline void page_pool_put_page(struct page_pool *pool, * allow registering MEM_TYPE_PAGE_POOL, but shield linker. */ #ifdef CONFIG_PAGE_POOL - if (!page_pool_is_last_frag(page)) + if (!page_pool_is_last_frag(netmem)) return; - page_pool_put_defragged_page(pool, page, dma_sync_size, allow_direct); + page_pool_put_defragged_page(pool, netmem, dma_sync_size, allow_direct); #endif } /** * page_pool_put_full_page() - release a reference on a page pool page * @pool: pool from which page was allocated - * @page: page to release a reference on + * @netmem: netmem to release a reference on * @allow_direct: released by the consumer, allow lockless caching * * Similar to page_pool_put_page(), but will DMA sync the entire memory area * as configured in &page_pool_params.max_len. 
*/ static inline void page_pool_put_full_page(struct page_pool *pool, - struct page *page, bool allow_direct) + struct netmem *netmem, bool allow_direct) { - page_pool_put_page(pool, page, -1, allow_direct); + page_pool_put_page(pool, netmem, -1, allow_direct); } /** * page_pool_recycle_direct() - release a reference on a page pool page * @pool: pool from which page was allocated - * @page: page to release a reference on + * @netmem: netmem to release a reference on * * Similar to page_pool_put_full_page() but caller must guarantee safe context * (e.g NAPI), since it will recycle the page directly into the pool fast cache. */ static inline void page_pool_recycle_direct(struct page_pool *pool, - struct page *page) + struct netmem *netmem) { - page_pool_put_full_page(pool, page, true); + page_pool_put_full_page(pool, netmem, true); } #define PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA \ @@ -347,19 +348,20 @@ static inline void page_pool_recycle_direct(struct page_pool *pool, static inline void page_pool_free_va(struct page_pool *pool, void *va, bool allow_direct) { - page_pool_put_page(pool, virt_to_head_page(va), -1, allow_direct); + page_pool_put_page(pool, page_to_netmem(virt_to_head_page(va)), -1, + allow_direct); } /** * page_pool_get_dma_addr() - Retrieve the stored DMA address. - * @page: page allocated from a page pool + * @netmem: netmem allocated from a page pool * * Fetch the DMA address of the page. The page pool to which the page belongs * must had been created with PP_FLAG_DMA_MAP. */ -static inline dma_addr_t page_pool_get_dma_addr(struct page *page) +static inline dma_addr_t page_pool_get_dma_addr(struct netmem *netmem) { - dma_addr_t ret = page->dma_addr; + dma_addr_t ret = netmem_to_page(netmem)->dma_addr; if (PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA) ret <<= PAGE_SHIFT; @@ -367,8 +369,10 @@ static inline dma_addr_t page_pool_get_dma_addr(struct page *page) return ret; } -static inline bool page_pool_set_dma_addr(struct page *page, dma_addr_t addr) +static inline bool page_pool_set_dma_addr(struct netmem *netmem, dma_addr_t addr) { + struct page *page = netmem_to_page(netmem); + if (PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA) { page->dma_addr = addr >> PAGE_SHIFT; diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h index ac286ea8ce2d..0faa5207a394 100644 --- a/include/net/page_pool/types.h +++ b/include/net/page_pool/types.h @@ -6,6 +6,7 @@ #include #include #include +#include #define PP_FLAG_DMA_MAP BIT(0) /* Should page_pool do the DMA * map/unmap @@ -199,9 +200,9 @@ struct page_pool { } user; }; -struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp); -struct page *page_pool_alloc_frag(struct page_pool *pool, unsigned int *offset, - unsigned int size, gfp_t gfp); +struct netmem *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp); +struct netmem *page_pool_alloc_frag(struct page_pool *pool, unsigned int *offset, + unsigned int size, gfp_t gfp); struct page_pool *page_pool_create(const struct page_pool_params *params); struct xdp_mem_info; @@ -234,7 +235,7 @@ static inline void page_pool_put_page_bulk(struct page_pool *pool, void **data, } #endif -void page_pool_put_defragged_page(struct page_pool *pool, struct page *page, +void page_pool_put_defragged_page(struct page_pool *pool, struct netmem *netmem, unsigned int dma_sync_size, bool allow_direct); diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c index 711cf5d59816..32e3fbc17e65 100644 --- a/net/bpf/test_run.c +++ b/net/bpf/test_run.c @@ -296,7 +296,7 @@ static int xdp_test_run_batch(struct 
xdp_test_data *xdp, struct bpf_prog *prog, xdp_set_return_frame_no_direct(); for (i = 0; i < batch_sz; i++) { - page = page_pool_dev_alloc_pages(xdp->pp); + page = netmem_to_page(page_pool_dev_alloc_pages(xdp->pp)); if (!page) { err = -ENOMEM; goto out; diff --git a/net/core/page_pool.c b/net/core/page_pool.c index c2e7c9a6efbe..e8ab7944e291 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -360,7 +360,7 @@ static void page_pool_dma_sync_for_device(struct page_pool *pool, struct page *page, unsigned int dma_sync_size) { - dma_addr_t dma_addr = page_pool_get_dma_addr(page); + dma_addr_t dma_addr = page_pool_get_dma_addr(page_to_netmem(page)); dma_sync_size = min(dma_sync_size, pool->p.max_len); dma_sync_single_range_for_device(pool->p.dev, dma_addr, @@ -384,7 +384,7 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page) if (dma_mapping_error(pool->p.dev, dma)) return false; - if (page_pool_set_dma_addr(page, dma)) + if (page_pool_set_dma_addr(page_to_netmem(page), dma)) goto unmap_failed; if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV) @@ -412,7 +412,7 @@ static void page_pool_set_pp_info(struct page_pool *pool, * is dirtying the same cache line as the page->pp_magic above, so * the overhead is negligible. */ - page_pool_fragment_page(page, 1); + page_pool_fragment_page(page_to_netmem(page), 1); if (pool->has_init_callback) pool->slow.init_callback(page, pool->slow.init_arg); } @@ -509,18 +509,18 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool, /* For using page_pool replace: alloc_pages() API calls, but provide * synchronization guarantee for allocation side. */ -struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp) +struct netmem *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp) { struct page *page; /* Fast-path: Get a page from cache */ page = __page_pool_get_cached(pool); if (page) - return page; + return page_to_netmem(page); /* Slow-path: cache empty, do real allocation */ page = __page_pool_alloc_pages_slow(pool, gfp); - return page; + return page_to_netmem(page); } EXPORT_SYMBOL(page_pool_alloc_pages); @@ -564,13 +564,13 @@ static void page_pool_return_page(struct page_pool *pool, struct page *page) */ goto skip_dma_unmap; - dma = page_pool_get_dma_addr(page); + dma = page_pool_get_dma_addr(page_to_netmem(page)); /* When page is unmapped, it cannot be returned to our pool */ dma_unmap_page_attrs(pool->p.dev, dma, PAGE_SIZE << pool->p.order, pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING); - page_pool_set_dma_addr(page, 0); + page_pool_set_dma_addr(page_to_netmem(page), 0); skip_dma_unmap: page_pool_clear_pp_info(page); @@ -677,9 +677,11 @@ __page_pool_put_page(struct page_pool *pool, struct page *page, return NULL; } -void page_pool_put_defragged_page(struct page_pool *pool, struct page *page, +void page_pool_put_defragged_page(struct page_pool *pool, struct netmem *netmem, unsigned int dma_sync_size, bool allow_direct) { + struct page *page = netmem_to_page(netmem); + page = __page_pool_put_page(pool, page, dma_sync_size, allow_direct); if (page && !page_pool_recycle_in_ring(pool, page)) { /* Cache full, fallback to free pages */ @@ -714,7 +716,7 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data, struct page *page = virt_to_head_page(data[i]); /* It is not the last user for the page frag case */ - if (!page_pool_is_last_frag(page)) + if (!page_pool_is_last_frag(page_to_netmem(page))) continue; page = __page_pool_put_page(pool, page, -1, false); @@ -756,7 +758,7 @@ 
static struct page *page_pool_drain_frag(struct page_pool *pool, long drain_count = BIAS_MAX - pool->frag_users; /* Some user is still using the page frag */ - if (likely(page_pool_defrag_page(page, drain_count))) + if (likely(page_pool_defrag_page(page_to_netmem(page), drain_count))) return NULL; if (page_ref_count(page) == 1 && !page_is_pfmemalloc(page)) { @@ -777,15 +779,14 @@ static void page_pool_free_frag(struct page_pool *pool) pool->frag_page = NULL; - if (!page || page_pool_defrag_page(page, drain_count)) + if (!page || page_pool_defrag_page(page_to_netmem(page), drain_count)) return; page_pool_return_page(pool, page); } -struct page *page_pool_alloc_frag(struct page_pool *pool, - unsigned int *offset, - unsigned int size, gfp_t gfp) +struct netmem *page_pool_alloc_frag(struct page_pool *pool, unsigned int *offset, + unsigned int size, gfp_t gfp) { unsigned int max_size = PAGE_SIZE << pool->p.order; struct page *page = pool->frag_page; @@ -805,7 +806,7 @@ struct page *page_pool_alloc_frag(struct page_pool *pool, } if (!page) { - page = page_pool_alloc_pages(pool, gfp); + page = netmem_to_page(page_pool_alloc_pages(pool, gfp)); if (unlikely(!page)) { pool->frag_page = NULL; return NULL; @@ -817,14 +818,14 @@ struct page *page_pool_alloc_frag(struct page_pool *pool, pool->frag_users = 1; *offset = 0; pool->frag_offset = size; - page_pool_fragment_page(page, BIAS_MAX); - return page; + page_pool_fragment_page(page_to_netmem(page), BIAS_MAX); + return page_to_netmem(page); } pool->frag_users++; pool->frag_offset = *offset + size; alloc_stat_inc(pool, fast); - return page; + return page_to_netmem(page); } EXPORT_SYMBOL(page_pool_alloc_frag); diff --git a/net/core/skbuff.c b/net/core/skbuff.c index b157efea5dea..01509728a753 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -928,7 +928,7 @@ bool napi_pp_put_page(struct page *page, bool napi_safe) * The page will be returned to the pool here regardless of the * 'flipped' fragment being in use or not. */ - page_pool_put_full_page(pp, page, allow_direct); + page_pool_put_full_page(pp, page_to_netmem(page), allow_direct); return true; } diff --git a/net/core/xdp.c b/net/core/xdp.c index b6f1d6dab3f2..681294eee763 100644 --- a/net/core/xdp.c +++ b/net/core/xdp.c @@ -387,7 +387,8 @@ void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct, /* No need to check ((page->pp_magic & ~0x3UL) == PP_SIGNATURE) * as mem->type knows this a page_pool page */ - page_pool_put_full_page(page->pp, page, napi_direct); + page_pool_put_full_page(page->pp, page_to_netmem(page), + napi_direct); break; case MEM_TYPE_PAGE_SHARED: page_frag_free(data);
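For readers wading through the driver hunks above, every conversion lands on the same calling pattern, so here is a minimal sketch of it. All demo_* identifiers below are made up for illustration only; the real interfaces used are the netmem-returning allocators and netmem-taking DMA/put helpers this series changes, plus the netmem_to_page()/page_to_netmem() converters it relies on:

/*
 * Illustration only -- not part of this patch. All demo_* names are
 * hypothetical; the page_pool calls and the netmem_to_page()/
 * page_to_netmem() converters are the ones this series introduces.
 */
#include <linux/mm.h>
#include <linux/dma-mapping.h>
#include <net/page_pool/helpers.h>

struct demo_rx_buf {
	struct page *page;	/* driver keeps tracking struct page */
	void *va;
	dma_addr_t dma;
};

/* Full-page RX refill: the allocator now hands back struct netmem *. */
static int demo_rx_refill(struct page_pool *pool, struct demo_rx_buf *buf,
			  unsigned int headroom)
{
	struct page *page;

	page = netmem_to_page(page_pool_dev_alloc_pages(pool));
	if (unlikely(!page))
		return -ENOMEM;

	buf->page = page;
	buf->va = page_address(page);
	/* The DMA-address and put/recycle helpers take netmem now. */
	buf->dma = page_pool_get_dma_addr(page_to_netmem(page)) + headroom;
	return 0;
}

/* Frag path, same idea; the offset/size contract is unchanged. */
static void *demo_alloc_frag(struct page_pool *pool, unsigned int size,
			     dma_addr_t *dma)
{
	unsigned int offset;
	struct page *page;

	page = netmem_to_page(page_pool_dev_alloc_frag(pool, &offset, size));
	if (unlikely(!page))
		return NULL;

	*dma = page_pool_get_dma_addr(page_to_netmem(page)) + offset;
	return page_address(page) + offset;
}

static void demo_rx_free(struct page_pool *pool, struct demo_rx_buf *buf)
{
	if (!buf->page)
		return;
	page_pool_put_full_page(pool, page_to_netmem(buf->page), false);
	buf->page = NULL;
}

The conversions in the hunks above are deliberately this mechanical: each driver keeps its existing struct page bookkeeping and only wraps or unwraps at the page_pool call boundary, so behaviour at each call site is unchanged.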