From patchwork Fri Dec 8 00:52:35 2023
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 175495
Date: Thu, 7 Dec 2023 16:52:35 -0800
In-Reply-To: <20231208005250.2910004-1-almasrymina@google.com>
Mime-Version: 1.0
References: <20231208005250.2910004-1-almasrymina@google.com>
X-Mailer: git-send-email 2.43.0.472.g3155946c3a-goog
Message-ID: <20231208005250.2910004-5-almasrymina@google.com>
Subject: [net-next v1 04/16] gve: implement queue api
From: Mina Almasry
To: Shailend Chand, netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-arch@vger.kernel.org,
 linux-kselftest@vger.kernel.org, bpf@vger.kernel.org,
 linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org
Cc: Mina Almasry, "David S. Miller", Eric Dumazet, Jakub Kicinski,
 Paolo Abeni, Jonathan Corbet, Jeroen de Borst, Praveen Kaligineedi,
 Jesper Dangaard Brouer, Ilias Apalodimas, Arnd Bergmann, David Ahern,
 Willem de Bruijn, Shuah Khan, Sumit Semwal, Christian König,
 Yunsheng Lin, Harshitha Ramamurthy, Shakeel Butt

Define a struct that contains all of the memory needed for an RX queue
to function.

Implement the queue-api in GVE using this struct.

Currently the only memory allocated at queue start time is the RX pages
posted in gve_rx_post_buffers_dqo(). That allocation can be moved up to
queue_mem_alloc() time in the future.

For simplicity the queue API is only supported by the diorite queue
out-of-order (DQO) format without queue-page-lists (QPL). Support for
other GVE formats can be added in the future as well.

Signed-off-by: Mina Almasry
---
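[ For readers coming to this patch without the rest of the series: the
  queue API consumed here is the set of four per-queue callbacks this
  patch wires into gve_netdev_ops. The core-side definition lands
  elsewhere in the series; what follows is only an inferred sketch of
  the assumed contract, with a hypothetical caller named
  queue_restart_sketch(), not the upstream implementation. ]

        /* Assumed shape of the per-queue hooks, mirroring the gve
         * callbacks added in this patch.
         */
        void *  (*ndo_queue_mem_alloc)(struct net_device *dev, int idx);
        void    (*ndo_queue_mem_free)(struct net_device *dev, void *per_q_mem);
        int     (*ndo_queue_start)(struct net_device *dev, int idx,
                                   void *per_q_mem);
        int     (*ndo_queue_stop)(struct net_device *dev, int idx,
                                  void **out_per_q_mem);

        /* Hypothetical core-side restart of queue @idx: allocate the
         * replacement memory before stopping the live queue, so a
         * failed allocation never takes the queue down.
         */
        static int queue_restart_sketch(struct net_device *dev, int idx)
        {
                const struct net_device_ops *ops = dev->netdev_ops;
                void *new_mem, *old_mem;
                int err;

                new_mem = ops->ndo_queue_mem_alloc(dev, idx);
                if (!new_mem)
                        return -ENOMEM;

                err = ops->ndo_queue_stop(dev, idx, &old_mem);
                if (err)
                        goto err_free_new;

                err = ops->ndo_queue_start(dev, idx, new_mem);
                if (err) {
                        /* Best effort: restore the old queue. */
                        ops->ndo_queue_start(dev, idx, old_mem);
                        goto err_free_new;
                }

                ops->ndo_queue_mem_free(dev, old_mem);
                return 0;

        err_free_new:
                ops->ndo_queue_mem_free(dev, new_mem);
                return err;
        }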
 drivers/net/ethernet/google/gve/gve_adminq.c |   6 +-
 drivers/net/ethernet/google/gve/gve_adminq.h |   3 +
 drivers/net/ethernet/google/gve/gve_dqo.h    |   2 +
 drivers/net/ethernet/google/gve/gve_main.c   | 286 +++++++++++++++++++
 drivers/net/ethernet/google/gve/gve_rx_dqo.c |   5 +-
 5 files changed, 296 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
index 12fbd723ecc6..e515b7278295 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.c
+++ b/drivers/net/ethernet/google/gve/gve_adminq.c
@@ -348,7 +348,7 @@ static int gve_adminq_parse_err(struct gve_priv *priv, u32 status)
 /* Flushes all AQ commands currently queued and waits for them to complete.
  * If there are failures, it will return the first error.
  */
-static int gve_adminq_kick_and_wait(struct gve_priv *priv)
+int gve_adminq_kick_and_wait(struct gve_priv *priv)
 {
        int tail, head;
        int i;
@@ -591,7 +591,7 @@ int gve_adminq_create_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_queues)
        return gve_adminq_kick_and_wait(priv);
 }
 
-static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index)
+int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index)
 {
        struct gve_rx_ring *rx = &priv->rx[queue_index];
        union gve_adminq_command cmd;
@@ -691,7 +691,7 @@ int gve_adminq_destroy_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_queues)
        return gve_adminq_kick_and_wait(priv);
 }
 
-static int gve_adminq_destroy_rx_queue(struct gve_priv *priv, u32 queue_index)
+int gve_adminq_destroy_rx_queue(struct gve_priv *priv, u32 queue_index)
 {
        union gve_adminq_command cmd;
        int err;
diff --git a/drivers/net/ethernet/google/gve/gve_adminq.h b/drivers/net/ethernet/google/gve/gve_adminq.h
index 5865ccdccbd0..265beed965dc 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.h
+++ b/drivers/net/ethernet/google/gve/gve_adminq.h
@@ -411,6 +411,7 @@ union gve_adminq_command {
 
 static_assert(sizeof(union gve_adminq_command) == 64);
 
+int gve_adminq_kick_and_wait(struct gve_priv *priv);
 int gve_adminq_alloc(struct device *dev, struct gve_priv *priv);
 void gve_adminq_free(struct device *dev, struct gve_priv *priv);
 void gve_adminq_release(struct gve_priv *priv);
@@ -424,7 +425,9 @@ int gve_adminq_deconfigure_device_resources(struct gve_priv *priv);
 int gve_adminq_create_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_queues);
 int gve_adminq_destroy_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_queues);
 int gve_adminq_create_rx_queues(struct gve_priv *priv, u32 num_queues);
+int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index);
 int gve_adminq_destroy_rx_queues(struct gve_priv *priv, u32 queue_id);
+int gve_adminq_destroy_rx_queue(struct gve_priv *priv, u32 queue_id);
 int gve_adminq_register_page_list(struct gve_priv *priv,
                                  struct gve_queue_page_list *qpl);
 int gve_adminq_unregister_page_list(struct gve_priv *priv, u32 page_list_id);
diff --git a/drivers/net/ethernet/google/gve/gve_dqo.h b/drivers/net/ethernet/google/gve/gve_dqo.h
index c36b93f0de15..3eed26a0ed7d 100644
--- a/drivers/net/ethernet/google/gve/gve_dqo.h
+++ b/drivers/net/ethernet/google/gve/gve_dqo.h
@@ -46,6 +46,8 @@ int gve_clean_tx_done_dqo(struct gve_priv *priv, struct gve_tx_ring *tx,
                          struct napi_struct *napi);
 void gve_rx_post_buffers_dqo(struct gve_rx_ring *rx);
 void gve_rx_write_doorbell_dqo(const struct gve_priv *priv, int queue_idx);
+void gve_free_page_dqo(struct gve_priv *priv, struct gve_rx_buf_state_dqo *bs,
+                      bool free_page);
 
 static inline void
 gve_tx_put_doorbell_dqo(const struct gve_priv *priv,
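[ The adminq changes above export single-queue variants of the create
  and destroy commands. They only enqueue a command; nothing reaches
  the device until gve_adminq_kick_and_wait() flushes the admin queue,
  which is why that helper is exported as well. A minimal usage sketch
  (hypothetical caller, error unwinding elided): ]

        static int gve_recreate_rx_queue_sketch(struct gve_priv *priv, int idx)
        {
                int err;

                /* Queue the destroy command, then flush the AQ. */
                err = gve_adminq_destroy_rx_queue(priv, idx);
                if (err)
                        return err;
                err = gve_adminq_kick_and_wait(priv);
                if (err)
                        return err;

                /* ... detach old ring memory, attach new ring memory ... */

                /* Queue the create command, then flush the AQ. */
                err = gve_adminq_create_rx_queue(priv, idx);
                if (err)
                        return err;
                return gve_adminq_kick_and_wait(priv);
        }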
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 619bf63ec935..5b23d811afd3 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -22,6 +22,7 @@
 #include "gve_dqo.h"
 #include "gve_adminq.h"
 #include "gve_register.h"
+#include "gve_utils.h"
 
 #define GVE_DEFAULT_RX_COPYBREAK       (256)
 
@@ -1702,6 +1703,287 @@ static int gve_xdp(struct net_device *dev, struct netdev_bpf *xdp)
        }
 }
 
+struct gve_per_rx_queue_mem_dqo {
+       struct gve_rx_buf_state_dqo *buf_states;
+       u32 num_buf_states;
+
+       struct gve_rx_compl_desc_dqo *complq_desc_ring;
+       dma_addr_t complq_bus;
+
+       struct gve_rx_desc_dqo *bufq_desc_ring;
+       dma_addr_t bufq_bus;
+
+       struct gve_queue_resources *q_resources;
+       dma_addr_t q_resources_bus;
+
+       size_t completion_queue_slots;
+       size_t buffer_queue_slots;
+};
+
+static int gve_rx_queue_stop(struct net_device *dev, int idx,
+                            void **out_per_q_mem)
+{
+       struct gve_per_rx_queue_mem_dqo *per_q_mem;
+       struct gve_priv *priv = netdev_priv(dev);
+       struct gve_notify_block *block;
+       struct gve_rx_ring *rx;
+       int ntfy_idx;
+       int err;
+
+       rx = &priv->rx[idx];
+       ntfy_idx = gve_rx_idx_to_ntfy(priv, idx);
+       block = &priv->ntfy_blocks[ntfy_idx];
+
+       if (priv->queue_format != GVE_DQO_RDA_FORMAT)
+               return -EOPNOTSUPP;
+
+       if (!out_per_q_mem)
+               return -EINVAL;
+
+       /* Stopping queue 0 while other queues are running unfortunately
+        * fails silently for GVE at the moment. Disable the queue-api for
+        * queue 0 until this is resolved.
+        */
+       if (idx == 0)
+               return -ERANGE;
+
+       per_q_mem = kvcalloc(1, sizeof(*per_q_mem), GFP_KERNEL);
+       if (!per_q_mem)
+               return -ENOMEM;
+
+       napi_disable(&block->napi);
+       err = gve_adminq_destroy_rx_queue(priv, idx);
+       if (err)
+               goto err_napi_enable;
+
+       err = gve_adminq_kick_and_wait(priv);
+       if (err)
+               goto err_create_rx_queue;
+
+       gve_remove_napi(priv, ntfy_idx);
+
+       per_q_mem->buf_states = rx->dqo.buf_states;
+       per_q_mem->num_buf_states = rx->dqo.num_buf_states;
+
+       per_q_mem->complq_desc_ring = rx->dqo.complq.desc_ring;
+       per_q_mem->complq_bus = rx->dqo.complq.bus;
+
+       per_q_mem->bufq_desc_ring = rx->dqo.bufq.desc_ring;
+       per_q_mem->bufq_bus = rx->dqo.bufq.bus;
+
+       per_q_mem->q_resources = rx->q_resources;
+       per_q_mem->q_resources_bus = rx->q_resources_bus;
+
+       per_q_mem->buffer_queue_slots = rx->dqo.bufq.mask + 1;
+       per_q_mem->completion_queue_slots = rx->dqo.complq.mask + 1;
+
+       *out_per_q_mem = per_q_mem;
+
+       return 0;
+
+err_create_rx_queue:
+       /* There is nothing we can do here if these fail. */
+       gve_adminq_create_rx_queue(priv, idx);
+       gve_adminq_kick_and_wait(priv);
+
+err_napi_enable:
+       napi_enable(&block->napi);
+       kvfree(per_q_mem);
+
+       return err;
+}
+
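[ gve_rx_queue_stop() hands the ring memory back to the core as an
  opaque blob. Note that the slot counts are not stored on the live
  ring; they are recovered from the index masks as mask + 1, which is
  only correct because DQO ring sizes are powers of two. A standalone
  illustration of that assumption: ]

        #include <stdbool.h>
        #include <stddef.h>

        /* mask = slots - 1 round-trips to the slot count only when
         * slots is a power of two (e.g. 4096 -> mask 0xfff -> 4096).
         */
        static bool slots_recoverable_from_mask(size_t slots)
        {
                return slots != 0 && (slots & (slots - 1)) == 0;
        }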
+static void gve_rx_queue_mem_free(struct net_device *dev, void *per_q_mem)
+{
+       struct gve_per_rx_queue_mem_dqo *gve_q_mem;
+       struct gve_priv *priv = netdev_priv(dev);
+       struct gve_rx_buf_state_dqo *bs;
+       struct device *hdev;
+       size_t size;
+       int i;
+
+       gve_q_mem = (struct gve_per_rx_queue_mem_dqo *)per_q_mem;
+       hdev = &priv->pdev->dev;
+
+       if (!gve_q_mem)
+               return;
+
+       if (priv->queue_format != GVE_DQO_RDA_FORMAT)
+               return;
+
+       for (i = 0; i < gve_q_mem->num_buf_states; i++) {
+               bs = &gve_q_mem->buf_states[i];
+               if (bs->page_info.page)
+                       gve_free_page_dqo(priv, bs, true);
+       }
+
+       if (gve_q_mem->q_resources)
+               dma_free_coherent(hdev, sizeof(*gve_q_mem->q_resources),
+                                 gve_q_mem->q_resources,
+                                 gve_q_mem->q_resources_bus);
+
+       if (gve_q_mem->bufq_desc_ring) {
+               size = sizeof(gve_q_mem->bufq_desc_ring[0]) *
+                       gve_q_mem->buffer_queue_slots;
+               dma_free_coherent(hdev, size, gve_q_mem->bufq_desc_ring,
+                                 gve_q_mem->bufq_bus);
+       }
+
+       if (gve_q_mem->complq_desc_ring) {
+               size = sizeof(gve_q_mem->complq_desc_ring[0]) *
+                       gve_q_mem->completion_queue_slots;
+               dma_free_coherent(hdev, size, gve_q_mem->complq_desc_ring,
+                                 gve_q_mem->complq_bus);
+       }
+
+       kvfree(gve_q_mem->buf_states);
+
+       kvfree(per_q_mem);
+}
+
+static void *gve_rx_queue_mem_alloc(struct net_device *dev, int idx)
+{
+       struct gve_per_rx_queue_mem_dqo *gve_q_mem;
+       struct gve_priv *priv = netdev_priv(dev);
+       struct device *hdev = &priv->pdev->dev;
+       size_t size;
+
+       if (priv->queue_format != GVE_DQO_RDA_FORMAT)
+               return NULL;
+
+       /* See comment in gve_rx_queue_stop() */
+       if (idx == 0)
+               return NULL;
+
+       gve_q_mem = kvcalloc(1, sizeof(*gve_q_mem), GFP_KERNEL);
+       if (!gve_q_mem)
+               goto err;
+
+       gve_q_mem->buffer_queue_slots =
+               priv->options_dqo_rda.rx_buff_ring_entries;
+       gve_q_mem->completion_queue_slots = priv->rx_desc_cnt;
+
+       gve_q_mem->num_buf_states =
+               min_t(s16, S16_MAX, gve_q_mem->buffer_queue_slots * 4);
+
+       gve_q_mem->buf_states = kvcalloc(gve_q_mem->num_buf_states,
+                                        sizeof(gve_q_mem->buf_states[0]),
+                                        GFP_KERNEL);
+       if (!gve_q_mem->buf_states)
+               goto err;
+
+       size = sizeof(struct gve_rx_compl_desc_dqo) *
+               gve_q_mem->completion_queue_slots;
+       gve_q_mem->complq_desc_ring = dma_alloc_coherent(hdev, size,
+                                                        &gve_q_mem->complq_bus,
+                                                        GFP_KERNEL);
+       if (!gve_q_mem->complq_desc_ring)
+               goto err;
+
+       size = sizeof(struct gve_rx_desc_dqo) * gve_q_mem->buffer_queue_slots;
+       gve_q_mem->bufq_desc_ring = dma_alloc_coherent(hdev, size,
+                                                      &gve_q_mem->bufq_bus,
+                                                      GFP_KERNEL);
+       if (!gve_q_mem->bufq_desc_ring)
+               goto err;
+
+       gve_q_mem->q_resources = dma_alloc_coherent(hdev,
+                                                   sizeof(*gve_q_mem->q_resources),
+                                                   &gve_q_mem->q_resources_bus,
+                                                   GFP_KERNEL);
+       if (!gve_q_mem->q_resources)
+               goto err;
+
+       return gve_q_mem;
+
+err:
+       gve_rx_queue_mem_free(dev, gve_q_mem);
+       return NULL;
+}
+
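[ The sizing in gve_rx_queue_mem_alloc() above allocates four buffer
  states per buffer-queue slot, clamped to S16_MAX because DQO buffer
  IDs are s16 (see the .next free-list indices used at queue start).
  The arithmetic, restated as plain C with assumed ring sizes and
  stand-in descriptor sizes so it can be checked in isolation: ]

        #include <stddef.h>

        #define S16_MAX_SKETCH 32767

        int main(void)
        {
                size_t buffer_queue_slots = 4096;       /* assumed */
                size_t completion_queue_slots = 1024;   /* assumed */
                size_t rx_desc_size = 16;       /* stand-in size */
                size_t compl_desc_size = 32;    /* stand-in size */

                /* 4096 * 4 = 16384 buffer states, under the 32767 clamp. */
                size_t num_buf_states = buffer_queue_slots * 4;

                if (num_buf_states > S16_MAX_SKETCH)
                        num_buf_states = S16_MAX_SKETCH;

                /* Bytes handed to dma_alloc_coherent() for each ring:
                 * bufq: 4096 * 16 = 65536, complq: 1024 * 32 = 32768.
                 */
                size_t bufq_bytes = buffer_queue_slots * rx_desc_size;
                size_t complq_bytes = completion_queue_slots * compl_desc_size;

                return !(num_buf_states == 16384 &&
                         bufq_bytes == 65536 && complq_bytes == 32768);
        }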
+static int gve_rx_queue_start(struct net_device *dev, int idx, void *per_q_mem)
+{
+       struct gve_per_rx_queue_mem_dqo *gve_q_mem;
+       struct gve_priv *priv = netdev_priv(dev);
+       struct gve_rx_ring *rx = &priv->rx[idx];
+       struct gve_notify_block *block;
+       int ntfy_idx;
+       int err;
+       int i;
+
+       if (priv->queue_format != GVE_DQO_RDA_FORMAT)
+               return -EOPNOTSUPP;
+
+       /* See comment in gve_rx_queue_stop() */
+       if (idx == 0)
+               return -ERANGE;
+
+       gve_q_mem = (struct gve_per_rx_queue_mem_dqo *)per_q_mem;
+       ntfy_idx = gve_rx_idx_to_ntfy(priv, idx);
+       block = &priv->ntfy_blocks[ntfy_idx];
+
+       netif_dbg(priv, drv, priv->dev, "starting rx ring DQO\n");
+
+       memset(rx, 0, sizeof(*rx));
+       rx->gve = priv;
+       rx->q_num = idx;
+       rx->dqo.bufq.mask = gve_q_mem->buffer_queue_slots - 1;
+       rx->dqo.complq.num_free_slots = gve_q_mem->completion_queue_slots;
+       rx->dqo.complq.mask = gve_q_mem->completion_queue_slots - 1;
+       rx->ctx.skb_head = NULL;
+       rx->ctx.skb_tail = NULL;
+
+       rx->dqo.num_buf_states = gve_q_mem->num_buf_states;
+
+       rx->dqo.buf_states = gve_q_mem->buf_states;
+
+       /* Set up linked list of buffer IDs */
+       for (i = 0; i < rx->dqo.num_buf_states - 1; i++)
+               rx->dqo.buf_states[i].next = i + 1;
+
+       rx->dqo.buf_states[rx->dqo.num_buf_states - 1].next = -1;
+       rx->dqo.recycled_buf_states.head = -1;
+       rx->dqo.recycled_buf_states.tail = -1;
+       rx->dqo.used_buf_states.head = -1;
+       rx->dqo.used_buf_states.tail = -1;
+
+       rx->dqo.complq.desc_ring = gve_q_mem->complq_desc_ring;
+       rx->dqo.complq.bus = gve_q_mem->complq_bus;
+
+       rx->dqo.bufq.desc_ring = gve_q_mem->bufq_desc_ring;
+       rx->dqo.bufq.bus = gve_q_mem->bufq_bus;
+
+       rx->q_resources = gve_q_mem->q_resources;
+       rx->q_resources_bus = gve_q_mem->q_resources_bus;
+
+       gve_rx_add_to_block(priv, idx);
+
+       err = gve_adminq_create_rx_queue(priv, idx);
+       if (err)
+               return err;
+
+       err = gve_adminq_kick_and_wait(priv);
+       if (err)
+               goto err_destroy_rx_queue;
+
+       /* TODO: pull the memory allocations in this function up into
+        * gve_rx_queue_mem_alloc().
+        */
+       gve_rx_post_buffers_dqo(&priv->rx[idx]);
+
+       napi_enable(&block->napi);
+       gve_set_itr_coalesce_usecs_dqo(priv, block, priv->rx_coalesce_usecs);
+
+       return 0;
+
+err_destroy_rx_queue:
+       /* There is nothing we can do if these fail. */
+       gve_adminq_destroy_rx_queue(priv, idx);
+       gve_adminq_kick_and_wait(priv);
+
+       return err;
+}
+
 int gve_adjust_queues(struct gve_priv *priv,
                      struct gve_queue_config new_rx_config,
                      struct gve_queue_config new_tx_config)
@@ -1900,6 +2182,10 @@ static const struct net_device_ops gve_netdev_ops = {
        .ndo_bpf                =       gve_xdp,
        .ndo_xdp_xmit           =       gve_xdp_xmit,
        .ndo_xsk_wakeup         =       gve_xsk_wakeup,
+       .ndo_queue_mem_alloc    =       gve_rx_queue_mem_alloc,
+       .ndo_queue_mem_free     =       gve_rx_queue_mem_free,
+       .ndo_queue_start        =       gve_rx_queue_start,
+       .ndo_queue_stop         =       gve_rx_queue_stop,
 };
 
 static void gve_handle_status(struct gve_priv *priv, u32 status)
diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
index f281e42a7ef9..e729f04d3f60 100644
--- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c
+++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
@@ -21,9 +21,8 @@ static int gve_buf_ref_cnt(struct gve_rx_buf_state_dqo *bs)
        return page_count(bs->page_info.page) - bs->page_info.pagecnt_bias;
 }
 
-static void gve_free_page_dqo(struct gve_priv *priv,
-                             struct gve_rx_buf_state_dqo *bs,
-                             bool free_page)
+void gve_free_page_dqo(struct gve_priv *priv, struct gve_rx_buf_state_dqo *bs,
+                      bool free_page)
 {
        page_ref_sub(bs->page_info.page, bs->page_info.pagecnt_bias - 1);
        if (free_page)
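[ For reference, the buffer-state lists that gve_rx_queue_start() above
  initializes are index-linked lists: each buf_state's .next holds the
  index of the next entry, -1 terminates a chain, and an empty list is
  head = tail = -1. A self-contained sketch of that structure (generic
  illustration using short for the driver's s16 indices, not gve's
  helpers): ]

        #include <stdio.h>

        struct idx_list {
                short head;     /* -1 when empty */
                short tail;
        };

        struct idx_node {
                short next;     /* index of next node, or -1 */
        };

        static void idx_list_push_tail(struct idx_list *l,
                                       struct idx_node *nodes, short i)
        {
                nodes[i].next = -1;
                if (l->head == -1)
                        l->head = i;
                else
                        nodes[l->tail].next = i;
                l->tail = i;
        }

        static short idx_list_pop_head(struct idx_list *l,
                                       struct idx_node *nodes)
        {
                short i = l->head;

                if (i == -1)
                        return -1;
                l->head = nodes[i].next;
                if (l->head == -1)
                        l->tail = -1;
                return i;
        }

        int main(void)
        {
                /* Chain 0 -> 1 -> 2 -> 3, as set up at queue start. */
                struct idx_node nodes[4] = { {1}, {2}, {3}, {-1} };
                struct idx_list free_list = { .head = 0, .tail = 3 };

                printf("popped %d\n", idx_list_pop_head(&free_list, nodes));
                idx_list_push_tail(&free_list, nodes, 0);
                return 0;
        }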