From patchwork Wed Dec 13 07:11:08 2023
X-Patchwork-Submitter: deepakx.nagaraju@intel.com
X-Patchwork-Id: 177801
From:
To: joyce.ooi@intel.com, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, Nagaraju DeepakX, Andy Shevchenko
Subject: [PATCH 1/5] net: ethernet: altera: remove unneeded assignments
Date: Wed, 13 Dec 2023 15:11:08 +0800
Message-Id: <20231213071112.18242-2-deepakx.nagaraju@intel.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20231213071112.18242-1-deepakx.nagaraju@intel.com>
References: <20231213071112.18242-1-deepakx.nagaraju@intel.com>
From: Nagaraju DeepakX

Remove unneeded assignments in the code.

Signed-off-by: Nagaraju DeepakX
Reviewed-by: Andy Shevchenko
Reviewed-by: Simon Horman
---
 drivers/net/ethernet/altera/altera_sgdma.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

--
2.26.2

diff --git a/drivers/net/ethernet/altera/altera_sgdma.c b/drivers/net/ethernet/altera/altera_sgdma.c
index 7f247ccbe6ba..5517f89f1ef9 100644
--- a/drivers/net/ethernet/altera/altera_sgdma.c
+++ b/drivers/net/ethernet/altera/altera_sgdma.c
@@ -63,8 +63,6 @@ int sgdma_initialize(struct altera_tse_private *priv)
 	INIT_LIST_HEAD(&priv->txlisthd);
 	INIT_LIST_HEAD(&priv->rxlisthd);
 
-	priv->rxdescphys = (dma_addr_t) 0;
-	priv->txdescphys = (dma_addr_t) 0;
 
 	priv->rxdescphys = dma_map_single(priv->device,
 					  (void __force *)priv->rx_dma_desc,
@@ -237,8 +235,8 @@ u32 sgdma_rx_status(struct altera_tse_private *priv)
 	desc = &base[0];
 
 	if (sts & SGDMA_STSREG_EOP) {
-		unsigned int pktlength = 0;
-		unsigned int pktstatus = 0;
+		unsigned int pktlength;
+		unsigned int pktstatus;
 		dma_sync_single_for_cpu(priv->device,
 					priv->rxdescphys,
 					SGDMA_DESC_LEN,

From patchwork Wed Dec 13 07:11:09 2023
X-Patchwork-Submitter: deepakx.nagaraju@intel.com
X-Patchwork-Id: 177806
From:
To: joyce.ooi@intel.com, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, Nagaraju DeepakX, Andy Shevchenko
Subject: [PATCH 2/5] net: ethernet: altera: fix indentation warnings
Date: Wed, 13 Dec 2023 15:11:09 +0800
Message-Id: <20231213071112.18242-3-deepakx.nagaraju@intel.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20231213071112.18242-1-deepakx.nagaraju@intel.com>
References: <20231213071112.18242-1-deepakx.nagaraju@intel.com>
From: Nagaraju DeepakX

Fix indentation issues such as missing blank lines after declarations
and alignment problems.

Signed-off-by: Nagaraju DeepakX
Reviewed-by: Andy Shevchenko
---
 drivers/net/ethernet/altera/altera_sgdma.c    | 22 +++++++++----------
 drivers/net/ethernet/altera/altera_tse_main.c |  9 +++-----
 2 files changed, 14 insertions(+), 17 deletions(-)

--
2.26.2

diff --git a/drivers/net/ethernet/altera/altera_sgdma.c b/drivers/net/ethernet/altera/altera_sgdma.c
index 5517f89f1ef9..d4edfb3e09e8 100644
--- a/drivers/net/ethernet/altera/altera_sgdma.c
+++ b/drivers/net/ethernet/altera/altera_sgdma.c
@@ -20,7 +20,7 @@ static void sgdma_setup_descrip(struct sgdma_descrip __iomem *desc,
 				int wfixed);
 
 static int sgdma_async_write(struct altera_tse_private *priv,
-			struct sgdma_descrip __iomem *desc);
+			     struct sgdma_descrip __iomem *desc);
 
 static int sgdma_async_read(struct altera_tse_private *priv);
 
@@ -63,7 +63,6 @@ int sgdma_initialize(struct altera_tse_private *priv)
 	INIT_LIST_HEAD(&priv->txlisthd);
 	INIT_LIST_HEAD(&priv->rxlisthd);
 
-
 	priv->rxdescphys = dma_map_single(priv->device,
 					  (void __force *)priv->rx_dma_desc,
 					  priv->rxdescmem, DMA_BIDIRECTIONAL);
@@ -192,9 +191,7 @@ int sgdma_tx_buffer(struct altera_tse_private *priv, struct tse_buffer *buffer)
 	return 1;
 }
 
-
-/* tx_lock held to protect access to queued tx list
- */
+/* tx_lock held to protect access to queued tx list */
 u32 sgdma_tx_completions(struct altera_tse_private *priv)
 {
 	u32 ready = 0;
@@ -237,10 +234,9 @@ u32 sgdma_rx_status(struct altera_tse_private *priv)
 	if (sts & SGDMA_STSREG_EOP) {
 		unsigned int pktlength;
 		unsigned int pktstatus;
-		dma_sync_single_for_cpu(priv->device,
-					priv->rxdescphys,
-					SGDMA_DESC_LEN,
-					DMA_FROM_DEVICE);
+
+		dma_sync_single_for_cpu(priv->device, priv->rxdescphys,
+					SGDMA_DESC_LEN, DMA_FROM_DEVICE);
 
 		pktlength = csrrd16(desc, sgdma_descroffs(bytes_xferred));
 		pktstatus = csrrd8(desc, sgdma_descroffs(status));
@@ -286,7 +282,6 @@ u32 sgdma_rx_status(struct altera_tse_private *priv)
 	return rxstatus;
 }
 
-
 /* Private functions */
 static void sgdma_setup_descrip(struct sgdma_descrip __iomem *desc,
 				struct sgdma_descrip __iomem *ndesc,
@@ -301,6 +296,7 @@ static void sgdma_setup_descrip(struct sgdma_descrip __iomem *desc,
 	/* Clear the next descriptor as not owned by hardware */
 
 	u32 ctrl = csrrd8(ndesc, sgdma_descroffs(control));
+
 	ctrl &= ~SGDMA_CONTROL_HW_OWNED;
 	csrwr8(ctrl, ndesc, sgdma_descroffs(control));
 
@@ -406,6 +402,7 @@ sgdma_txphysaddr(struct altera_tse_private *priv,
 {
 	dma_addr_t paddr = priv->txdescmem_busaddr;
 	uintptr_t offs = (uintptr_t)desc - (uintptr_t)priv->tx_dma_desc;
+
 	return (dma_addr_t)((uintptr_t)paddr + offs);
 }
 
@@ -415,6 +412,7 @@ sgdma_rxphysaddr(struct altera_tse_private *priv,
 {
 	dma_addr_t paddr = priv->rxdescmem_busaddr;
 	uintptr_t offs = (uintptr_t)desc - (uintptr_t)priv->rx_dma_desc;
+
 	return (dma_addr_t)((uintptr_t)paddr + offs);
 }
 
@@ -445,7 +443,6 @@ queue_tx(struct altera_tse_private *priv, struct tse_buffer *buffer)
 	list_add_tail(&buffer->lh, &priv->txlisthd);
 }
 
-
 /* adds a tse_buffer to the tail of a rx buffer list
  * assumes the caller is managing and holding a mutual exclusion
  * primitive to avoid simultaneous pushes/pops to the list.
@@ -465,6 +462,7 @@ static struct tse_buffer *
 dequeue_tx(struct altera_tse_private *priv)
 {
 	struct tse_buffer *buffer = NULL;
+
 	list_remove_head(&priv->txlisthd, buffer, struct tse_buffer, lh);
 	return buffer;
 }
@@ -478,6 +476,7 @@ static struct tse_buffer *
 dequeue_rx(struct altera_tse_private *priv)
 {
 	struct tse_buffer *buffer = NULL;
+
 	list_remove_head(&priv->rxlisthd, buffer, struct tse_buffer, lh);
 	return buffer;
 }
@@ -492,6 +491,7 @@ static struct tse_buffer *
 queue_rx_peekhead(struct altera_tse_private *priv)
 {
 	struct tse_buffer *buffer = NULL;
+
 	list_peek_head(&priv->rxlisthd, buffer, struct tse_buffer, lh);
 	return buffer;
 }
diff --git a/drivers/net/ethernet/altera/altera_tse_main.c b/drivers/net/ethernet/altera/altera_tse_main.c
index 1c8763be0e4b..6a1a004ea693 100644
--- a/drivers/net/ethernet/altera/altera_tse_main.c
+++ b/drivers/net/ethernet/altera/altera_tse_main.c
@@ -258,14 +258,12 @@ static int alloc_init_skbufs(struct altera_tse_private *priv)
 	int i;
 
 	/* Create Rx ring buffer */
-	priv->rx_ring = kcalloc(rx_descs, sizeof(struct tse_buffer),
-				GFP_KERNEL);
+	priv->rx_ring = kcalloc(rx_descs, sizeof(struct tse_buffer), GFP_KERNEL);
 	if (!priv->rx_ring)
 		goto err_rx_ring;
 
 	/* Create Tx ring buffer */
-	priv->tx_ring = kcalloc(tx_descs, sizeof(struct tse_buffer),
-				GFP_KERNEL);
+	priv->tx_ring = kcalloc(tx_descs, sizeof(struct tse_buffer), GFP_KERNEL);
 	if (!priv->tx_ring)
 		goto err_tx_ring;
 
@@ -319,8 +317,7 @@ static inline void tse_rx_refill(struct altera_tse_private *priv)
 	unsigned int entry;
 	int ret;
 
-	for (; priv->rx_cons - priv->rx_prod > 0;
-	     priv->rx_prod++) {
+	for (; priv->rx_cons - priv->rx_prod > 0; priv->rx_prod++) {
 		entry = priv->rx_prod % rxsize;
 		if (likely(priv->rx_ring[entry].skb == NULL)) {
 			ret = tse_init_rx_buffer(priv, &priv->rx_ring[entry],
From patchwork Wed Dec 13 07:11:10 2023
X-Patchwork-Submitter: deepakx.nagaraju@intel.com
X-Patchwork-Id: 177802
From:
To: joyce.ooi@intel.com, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, Nagaraju DeepakX, Andy Shevchenko
Subject: [PATCH 3/5] net: ethernet: altera: move read write functions
Date: Wed, 13 Dec 2023 15:11:10 +0800
Message-Id: <20231213071112.18242-4-deepakx.nagaraju@intel.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20231213071112.18242-1-deepakx.nagaraju@intel.com>
References: <20231213071112.18242-1-deepakx.nagaraju@intel.com>

From: Nagaraju DeepakX

Move the read/write helper functions from altera_tse.h to altera_utils.h
so they can be shared with future Altera Ethernet IPs.

Signed-off-by: Nagaraju DeepakX
Reviewed-by: Andy Shevchenko
Reviewed-by: Simon Horman
---
 drivers/net/ethernet/altera/altera_tse.h      | 45 -------------------
 .../net/ethernet/altera/altera_tse_ethtool.c  |  1 +
 drivers/net/ethernet/altera/altera_utils.h    | 43 ++++++++++++++++++
 3 files changed, 44 insertions(+), 45 deletions(-)

--
2.26.2

diff --git a/drivers/net/ethernet/altera/altera_tse.h b/drivers/net/ethernet/altera/altera_tse.h
index 82f2363a45cd..4874139e7cdf 100644
--- a/drivers/net/ethernet/altera/altera_tse.h
+++ b/drivers/net/ethernet/altera/altera_tse.h
@@ -483,49 +483,4 @@ struct altera_tse_private {
  */
 void altera_tse_set_ethtool_ops(struct net_device *);
 
-static inline
-u32 csrrd32(void __iomem *mac, size_t offs)
-{
-	void __iomem *paddr = (void __iomem *)((uintptr_t)mac + offs);
-	return readl(paddr);
-}
-
-static inline
-u16 csrrd16(void __iomem *mac, size_t offs)
-{
-	void __iomem *paddr = (void __iomem *)((uintptr_t)mac + offs);
-	return readw(paddr);
-}
-
-static inline
-u8 csrrd8(void __iomem *mac, size_t offs)
-{
-	void __iomem *paddr = (void __iomem *)((uintptr_t)mac + offs);
-	return readb(paddr);
-}
-
-static inline
-void csrwr32(u32 val, void __iomem *mac, size_t offs)
-{
-	void __iomem *paddr = (void __iomem *)((uintptr_t)mac + offs);
-
-	writel(val, paddr);
-}
-
-static inline
-void csrwr16(u16 val, void __iomem *mac, size_t offs)
-{
-	void __iomem *paddr = (void __iomem *)((uintptr_t)mac + offs);
-
-	writew(val, paddr);
-}
-
-static inline
-void csrwr8(u8 val, void __iomem *mac, size_t offs)
-{
-	void __iomem *paddr = (void __iomem *)((uintptr_t)mac + offs);
-
-	writeb(val, paddr);
-}
-
 #endif /* __ALTERA_TSE_H__ */
diff --git a/drivers/net/ethernet/altera/altera_tse_ethtool.c b/drivers/net/ethernet/altera/altera_tse_ethtool.c
index 81313c85833e..d34373bac94a 100644
--- a/drivers/net/ethernet/altera/altera_tse_ethtool.c
+++ b/drivers/net/ethernet/altera/altera_tse_ethtool.c
@@ -22,6 +22,7 @@
 #include 
 
 #include "altera_tse.h"
+#include "altera_utils.h"
 
 #define TSE_STATS_LEN	31
 #define TSE_NUM_REGS	128
diff --git a/drivers/net/ethernet/altera/altera_utils.h b/drivers/net/ethernet/altera/altera_utils.h
index 3c2e32fb7389..c3f09c5257f7 100644
--- a/drivers/net/ethernet/altera/altera_utils.h
+++ b/drivers/net/ethernet/altera/altera_utils.h
@@ -7,6 +7,7 @@
 #define __ALTERA_UTILS_H__
 
 #include 
+#include 
 #include 
 
 void tse_set_bit(void __iomem *ioaddr, size_t offs, u32 bit_mask);
@@ -14,4 +15,46 @@ void tse_clear_bit(void __iomem *ioaddr, size_t offs, u32 bit_mask);
 int tse_bit_is_set(void __iomem *ioaddr, size_t offs, u32 bit_mask);
 int tse_bit_is_clear(void __iomem *ioaddr, size_t offs, u32 bit_mask);
 
+static inline u32 csrrd32(void __iomem *mac, size_t offs)
+{
+	void __iomem *paddr = mac + offs;
+
+	return readl(paddr);
+}
+
+static inline u16 csrrd16(void __iomem *mac, size_t offs)
+{
+	void __iomem *paddr = mac + offs;
+
+	return readw(paddr);
+}
+
+static inline u8 csrrd8(void __iomem *mac, size_t offs)
+{
+	void __iomem *paddr = mac + offs;
+
+	return readb(paddr);
+}
+
+static inline void csrwr32(u32 val, void __iomem *mac, size_t offs)
+{
+	void __iomem *paddr = mac + offs;
+
+	writel(val, paddr);
+}
+
+static inline void csrwr16(u16 val, void __iomem *mac, size_t offs)
+{
+	void __iomem *paddr = mac + offs;
+
+	writew(val, paddr);
+}
+
+static inline void csrwr8(u8 val, void __iomem *mac, size_t offs)
+{
+	void __iomem *paddr = mac + offs;
+
+	writeb(val, paddr);
+}
+
 #endif /* __ALTERA_UTILS_H__*/
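The helpers moved above are thin MMIO wrappers, so any Altera Ethernet driver that includes altera_utils.h can reuse them. A minimal sketch of such a consumer, assuming a hypothetical register block (the function name, the 0x04 offset and the reset bit below are illustrative only and not part of this series):

#include "altera_utils.h"

/* Hypothetical consumer: set an assumed reset bit in a control CSR. */
static void example_assert_reset(void __iomem *csr_base)
{
	u32 ctrl = csrrd32(csr_base, 0x04);	/* 0x04: assumed control register offset */

	csrwr32(ctrl | 0x1, csr_base, 0x04);	/* bit 0: assumed reset bit */
}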
From patchwork Wed Dec 13 07:11:11 2023
X-Patchwork-Submitter: deepakx.nagaraju@intel.com
X-Patchwork-Id: 177803
From:
To: joyce.ooi@intel.com, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, Nagaraju DeepakX, Andy Shevchenko
Subject: [PATCH 4/5] net: ethernet: altera: sorting headers in alphabetical order
Date: Wed, 13 Dec 2023 15:11:11 +0800
Message-Id: <20231213071112.18242-5-deepakx.nagaraju@intel.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20231213071112.18242-1-deepakx.nagaraju@intel.com>
References: <20231213071112.18242-1-deepakx.nagaraju@intel.com>

From: Nagaraju DeepakX

Re-arrange the headers in alphabetical order and add empty lines between
groups of headers for easier maintenance.

Signed-off-by: Nagaraju DeepakX
Reviewed-by: Andy Shevchenko
---
 drivers/net/ethernet/altera/altera_msgdma.c   | 7 ++++---
 drivers/net/ethernet/altera/altera_sgdma.c    | 7 ++++---
 drivers/net/ethernet/altera/altera_tse_main.c | 7 ++++---
 3 files changed, 12 insertions(+), 9 deletions(-)

--
2.26.2

diff --git a/drivers/net/ethernet/altera/altera_msgdma.c b/drivers/net/ethernet/altera/altera_msgdma.c
index ac1efd08267a..9581c05a5449 100644
--- a/drivers/net/ethernet/altera/altera_msgdma.c
+++ b/drivers/net/ethernet/altera/altera_msgdma.c
@@ -4,10 +4,11 @@
  */
 
 #include 
-#include "altera_utils.h"
-#include "altera_tse.h"
-#include "altera_msgdmahw.h"
+
 #include "altera_msgdma.h"
+#include "altera_msgdmahw.h"
+#include "altera_tse.h"
+#include "altera_utils.h"
 
 /* No initialization work to do for MSGDMA */
 int msgdma_initialize(struct altera_tse_private *priv)
diff --git a/drivers/net/ethernet/altera/altera_sgdma.c b/drivers/net/ethernet/altera/altera_sgdma.c
index d4edfb3e09e8..f6c9904c88d0 100644
--- a/drivers/net/ethernet/altera/altera_sgdma.c
+++ b/drivers/net/ethernet/altera/altera_sgdma.c
@@ -4,10 +4,11 @@
  */
 
 #include 
-#include "altera_utils.h"
-#include "altera_tse.h"
-#include "altera_sgdmahw.h"
+
 #include "altera_sgdma.h"
+#include "altera_sgdmahw.h"
+#include "altera_tse.h"
+#include "altera_utils.h"
 
 static void sgdma_setup_descrip(struct sgdma_descrip __iomem *desc,
 				struct sgdma_descrip __iomem *ndesc,
diff --git a/drivers/net/ethernet/altera/altera_tse_main.c b/drivers/net/ethernet/altera/altera_tse_main.c
index 6a1a004ea693..f98810eac44f 100644
--- a/drivers/net/ethernet/altera/altera_tse_main.c
+++ b/drivers/net/ethernet/altera/altera_tse_main.c
@@ -38,12 +38,13 @@
 #include 
 #include 
 #include 
+
 #include 
 
-#include "altera_utils.h"
-#include "altera_tse.h"
-#include "altera_sgdma.h"
 #include "altera_msgdma.h"
+#include "altera_sgdma.h"
+#include "altera_tse.h"
+#include "altera_utils.h"
 
 static atomic_t instance_count = ATOMIC_INIT(~0);
 /* Module parameters */
From patchwork Wed Dec 13 07:11:12 2023
X-Patchwork-Submitter: deepakx.nagaraju@intel.com
X-Patchwork-Id: 177804
From:
To: joyce.ooi@intel.com, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, Nagaraju DeepakX, Andy Shevchenko
Subject: [PATCH 5/5] net: ethernet: altera: rename functions and their prototypes
Date: Wed, 13 Dec 2023 15:11:12 +0800
Message-Id: <20231213071112.18242-6-deepakx.nagaraju@intel.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20231213071112.18242-1-deepakx.nagaraju@intel.com>
References: <20231213071112.18242-1-deepakx.nagaraju@intel.com>

From: Nagaraju DeepakX

Move the standard DMA interface for SGDMA and mSGDMA into a common file
and rename the private data structures from tse_private to dma_private.

Signed-off-by: Nagaraju DeepakX
Reviewed-by: Andy Shevchenko
---
 drivers/net/ethernet/altera/Makefile          |   5 +-
 drivers/net/ethernet/altera/altera_eth_dma.c  |  58 +++++
 drivers/net/ethernet/altera/altera_eth_dma.h  | 121 +++++++++
 drivers/net/ethernet/altera/altera_msgdma.c   |  31 +--
 drivers/net/ethernet/altera/altera_msgdma.h   |  28 +-
 drivers/net/ethernet/altera/altera_sgdma.c    | 105 ++++----
 drivers/net/ethernet/altera/altera_sgdma.h    |  30 +--
 drivers/net/ethernet/altera/altera_tse.h      |  26 +-
 .../net/ethernet/altera/altera_tse_ethtool.c  |   1 +
 drivers/net/ethernet/altera/altera_tse_main.c | 241 +++++++-----------
 drivers/net/ethernet/altera/altera_utils.c    |   1 +
 11 files changed, 379 insertions(+), 268 deletions(-)
 create mode 100644 drivers/net/ethernet/altera/altera_eth_dma.c
 create mode 100644 drivers/net/ethernet/altera/altera_eth_dma.h

--
2.26.2

diff --git a/drivers/net/ethernet/altera/Makefile b/drivers/net/ethernet/altera/Makefile
index a52db80aee9f..ce723832edc4 100644
--- a/drivers/net/ethernet/altera/Makefile
+++ b/drivers/net/ethernet/altera/Makefile
@@ -3,6 +3,9 @@
 # Makefile for the Altera device drivers.
 #
 
+ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=NET_ALTERA
+
 obj-$(CONFIG_ALTERA_TSE) += altera_tse.o
 altera_tse-objs := altera_tse_main.o altera_tse_ethtool.o \
-altera_msgdma.o altera_sgdma.o altera_utils.o
+		   altera_msgdma.o altera_sgdma.o altera_utils.o \
+		   altera_eth_dma.o
diff --git a/drivers/net/ethernet/altera/altera_eth_dma.c b/drivers/net/ethernet/altera/altera_eth_dma.c
new file mode 100644
index 000000000000..6a47a3cb3406
--- /dev/null
+++ b/drivers/net/ethernet/altera/altera_eth_dma.c
@@ -0,0 +1,58 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* DMA support for Intel FPGA Quad-Speed Ethernet MAC driver
+ * Copyright (C) 2023 Intel Corporation.
All rights reserved + */ + +#include +#include +#include +#include + +#include "altera_eth_dma.h" +#include "altera_utils.h" + +/* Probe DMA */ +int altera_eth_dma_probe(struct platform_device *pdev, struct altera_dma_private *priv, + enum altera_dma_type type) +{ + void __iomem *descmap; + + /* xSGDMA Rx Dispatcher address space */ + priv->rx_dma_csr = devm_platform_ioremap_resource_byname(pdev, "rx_csr"); + if (IS_ERR(priv->rx_dma_csr)) + return PTR_ERR(priv->rx_dma_csr); + + /* mSGDMA Tx Dispatcher address space */ + priv->tx_dma_csr = devm_platform_ioremap_resource_byname(pdev, "tx_csr"); + if (IS_ERR(priv->rx_dma_csr)) + return PTR_ERR(priv->rx_dma_csr); + + switch (type) { + case ALTERA_DTYPE_SGDMA: + /* Get the mapped address to the SGDMA descriptor memory */ + descmap = devm_platform_ioremap_resource_byname(pdev, "s1"); + if (IS_ERR(descmap)) + return PTR_ERR(descmap); + break; + case ALTERA_DTYPE_MSGDMA: + priv->rx_dma_resp = devm_platform_ioremap_resource_byname(pdev, "rx_resp"); + if (IS_ERR(priv->rx_dma_resp)) + return PTR_ERR(priv->rx_dma_resp); + + priv->tx_dma_desc = devm_platform_ioremap_resource_byname(pdev, "tx_desc"); + if (IS_ERR(priv->tx_dma_desc)) + return PTR_ERR(priv->tx_dma_desc); + + priv->rx_dma_desc = devm_platform_ioremap_resource_byname(pdev, "rx_desc"); + if (IS_ERR(priv->rx_dma_desc)) + return PTR_ERR(priv->rx_dma_desc); + break; + default: + return -ENODEV; + } + + return 0; + +}; +EXPORT_SYMBOL_NS(altera_eth_dma_probe, NET_ALTERA); +MODULE_LICENSE("GPL"); diff --git a/drivers/net/ethernet/altera/altera_eth_dma.h b/drivers/net/ethernet/altera/altera_eth_dma.h new file mode 100644 index 000000000000..5007f2396221 --- /dev/null +++ b/drivers/net/ethernet/altera/altera_eth_dma.h @@ -0,0 +1,121 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* DMA support for Intel FPGA Quad-Speed Ethernet MAC driver + * Copyright (C) 2023 Intel Corporation. All rights reserved + */ + +#ifndef __ALTERA_ETH_DMA_H__ +#define __ALTERA_ETH_DMA_H__ + +#include + +struct device; +struct net_device; +struct platform_device; +struct sk_buff; + +struct altera_dma_buffer; +struct altera_dma_private; +struct msgdma_pref_extended_desc; + +/* Wrapper around a pointer to a socket buffer, + * so a DMA handle can be stored along with the buffer. 
+ */ + +struct altera_dma_buffer { + struct list_head lh; + struct sk_buff *skb; + dma_addr_t dma_addr; + u32 len; + int mapped_as_page; +}; + +struct altera_dma_private { + struct net_device *dev; + struct device *device; + + /* mSGDMA Rx Dispatcher address space */ + void __iomem *rx_dma_csr; + void __iomem *rx_dma_desc; + void __iomem *rx_dma_resp; + + /* mSGDMA Tx Dispatcher address space */ + void __iomem *tx_dma_csr; + void __iomem *tx_dma_desc; + void __iomem *tx_dma_resp; + + /* mSGDMA Rx Prefecher address space */ + void __iomem *rx_pref_csr; + struct msgdma_pref_extended_desc *pref_rxdesc; + dma_addr_t pref_rxdescphys; + u32 pref_rx_prod; + + /* mSGDMA Tx Prefecher address space */ + void __iomem *tx_pref_csr; + struct msgdma_pref_extended_desc *pref_txdesc; + dma_addr_t pref_txdescphys; + u32 rx_poll_freq; + u32 tx_poll_freq; + + /* Rx buffers queue */ + struct altera_dma_buffer *rx_ring; + u32 rx_cons; + u32 rx_prod; + u32 rx_ring_size; + u32 rx_dma_buf_sz; + + /* Tx ring buffer */ + struct altera_dma_buffer *tx_ring; + u32 tx_prod; + u32 tx_cons; + u32 tx_ring_size; + + /* Descriptor memory info for managing SGDMA */ + u32 txdescmem; + u32 rxdescmem; + dma_addr_t rxdescmem_busaddr; + dma_addr_t txdescmem_busaddr; + u32 txctrlreg; + u32 rxctrlreg; + dma_addr_t rxdescphys; + dma_addr_t txdescphys; + + struct list_head txlisthd; + struct list_head rxlisthd; + + int hwts_tx_en; + int hwts_rx_en; + + /* ethtool msglvl option */ + u32 msg_enable; +}; + +enum altera_dma_type { + ALTERA_DTYPE_SGDMA = 1, + ALTERA_DTYPE_MSGDMA = 2, +}; + +/* standard DMA interface for SGDMA and MSGDMA */ +struct altera_dmaops { + enum altera_dma_type altera_dtype; + int dmamask; + void (*reset_dma)(struct altera_dma_private *priv); + void (*enable_txirq)(struct altera_dma_private *priv); + void (*enable_rxirq)(struct altera_dma_private *priv); + void (*disable_txirq)(struct altera_dma_private *priv); + void (*disable_rxirq)(struct altera_dma_private *priv); + void (*clear_txirq)(struct altera_dma_private *priv); + void (*clear_rxirq)(struct altera_dma_private *priv); + int (*tx_buffer)(struct altera_dma_private *priv, struct altera_dma_buffer *buffer); + u32 (*tx_completions)(struct altera_dma_private *priv); + void (*add_rx_desc)(struct altera_dma_private *priv, struct altera_dma_buffer *buffer); + u32 (*get_rx_status)(struct altera_dma_private *priv); + int (*init_dma)(struct altera_dma_private *priv); + void (*uninit_dma)(struct altera_dma_private *priv); + void (*start_rxdma)(struct altera_dma_private *priv); + void (*start_txdma)(struct altera_dma_private *priv); +}; + +int altera_eth_dma_probe(struct platform_device *pdev, struct altera_dma_private *priv, + enum altera_dma_type type); + +#endif /* __ALTERA_ETH_DMA_H__ */ diff --git a/drivers/net/ethernet/altera/altera_msgdma.c b/drivers/net/ethernet/altera/altera_msgdma.c index 9581c05a5449..d15400c33d05 100644 --- a/drivers/net/ethernet/altera/altera_msgdma.c +++ b/drivers/net/ethernet/altera/altera_msgdma.c @@ -5,26 +5,27 @@ #include +#include "altera_eth_dma.h" #include "altera_msgdma.h" #include "altera_msgdmahw.h" #include "altera_tse.h" #include "altera_utils.h" /* No initialization work to do for MSGDMA */ -int msgdma_initialize(struct altera_tse_private *priv) +int msgdma_initialize(struct altera_dma_private *priv) { return 0; } -void msgdma_uninitialize(struct altera_tse_private *priv) +void msgdma_uninitialize(struct altera_dma_private *priv) { } -void msgdma_start_rxdma(struct altera_tse_private *priv) +void msgdma_start_rxdma(struct 
altera_dma_private *priv) { } -void msgdma_reset(struct altera_tse_private *priv) +void msgdma_reset(struct altera_dma_private *priv) { int counter; @@ -72,42 +73,42 @@ void msgdma_reset(struct altera_tse_private *priv) csrwr32(MSGDMA_CSR_STAT_MASK, priv->tx_dma_csr, msgdma_csroffs(status)); } -void msgdma_disable_rxirq(struct altera_tse_private *priv) +void msgdma_disable_rxirq(struct altera_dma_private *priv) { tse_clear_bit(priv->rx_dma_csr, msgdma_csroffs(control), MSGDMA_CSR_CTL_GLOBAL_INTR); } -void msgdma_enable_rxirq(struct altera_tse_private *priv) +void msgdma_enable_rxirq(struct altera_dma_private *priv) { tse_set_bit(priv->rx_dma_csr, msgdma_csroffs(control), MSGDMA_CSR_CTL_GLOBAL_INTR); } -void msgdma_disable_txirq(struct altera_tse_private *priv) +void msgdma_disable_txirq(struct altera_dma_private *priv) { tse_clear_bit(priv->tx_dma_csr, msgdma_csroffs(control), MSGDMA_CSR_CTL_GLOBAL_INTR); } -void msgdma_enable_txirq(struct altera_tse_private *priv) +void msgdma_enable_txirq(struct altera_dma_private *priv) { tse_set_bit(priv->tx_dma_csr, msgdma_csroffs(control), MSGDMA_CSR_CTL_GLOBAL_INTR); } -void msgdma_clear_rxirq(struct altera_tse_private *priv) +void msgdma_clear_rxirq(struct altera_dma_private *priv) { csrwr32(MSGDMA_CSR_STAT_IRQ, priv->rx_dma_csr, msgdma_csroffs(status)); } -void msgdma_clear_txirq(struct altera_tse_private *priv) +void msgdma_clear_txirq(struct altera_dma_private *priv) { csrwr32(MSGDMA_CSR_STAT_IRQ, priv->tx_dma_csr, msgdma_csroffs(status)); } /* return 0 to indicate transmit is pending */ -int msgdma_tx_buffer(struct altera_tse_private *priv, struct tse_buffer *buffer) +int msgdma_tx_buffer(struct altera_dma_private *priv, struct altera_dma_buffer *buffer) { csrwr32(lower_32_bits(buffer->dma_addr), priv->tx_dma_desc, msgdma_descroffs(read_addr_lo)); @@ -124,7 +125,7 @@ int msgdma_tx_buffer(struct altera_tse_private *priv, struct tse_buffer *buffer) return 0; } -u32 msgdma_tx_completions(struct altera_tse_private *priv) +u32 msgdma_tx_completions(struct altera_dma_private *priv) { u32 ready = 0; u32 inuse; @@ -150,8 +151,8 @@ u32 msgdma_tx_completions(struct altera_tse_private *priv) /* Put buffer to the mSGDMA RX FIFO */ -void msgdma_add_rx_desc(struct altera_tse_private *priv, - struct tse_buffer *rxbuffer) +void msgdma_add_rx_desc(struct altera_dma_private *priv, + struct altera_dma_buffer *rxbuffer) { u32 len = priv->rx_dma_buf_sz; dma_addr_t dma_addr = rxbuffer->dma_addr; @@ -177,7 +178,7 @@ void msgdma_add_rx_desc(struct altera_tse_private *priv, /* status is returned on upper 16 bits, * length is returned in lower 16 bits */ -u32 msgdma_rx_status(struct altera_tse_private *priv) +u32 msgdma_rx_status(struct altera_dma_private *priv) { u32 rxstatus = 0; u32 pktlength; diff --git a/drivers/net/ethernet/altera/altera_msgdma.h b/drivers/net/ethernet/altera/altera_msgdma.h index 9813fbfff4d3..ac04eb676bc8 100644 --- a/drivers/net/ethernet/altera/altera_msgdma.h +++ b/drivers/net/ethernet/altera/altera_msgdma.h @@ -6,19 +6,19 @@ #ifndef __ALTERA_MSGDMA_H__ #define __ALTERA_MSGDMA_H__ -void msgdma_reset(struct altera_tse_private *); -void msgdma_enable_txirq(struct altera_tse_private *); -void msgdma_enable_rxirq(struct altera_tse_private *); -void msgdma_disable_rxirq(struct altera_tse_private *); -void msgdma_disable_txirq(struct altera_tse_private *); -void msgdma_clear_rxirq(struct altera_tse_private *); -void msgdma_clear_txirq(struct altera_tse_private *); -u32 msgdma_tx_completions(struct altera_tse_private *); -void 
msgdma_add_rx_desc(struct altera_tse_private *, struct tse_buffer *); -int msgdma_tx_buffer(struct altera_tse_private *, struct tse_buffer *); -u32 msgdma_rx_status(struct altera_tse_private *); -int msgdma_initialize(struct altera_tse_private *); -void msgdma_uninitialize(struct altera_tse_private *); -void msgdma_start_rxdma(struct altera_tse_private *); +void msgdma_reset(struct altera_dma_private *priv); +void msgdma_enable_txirq(struct altera_dma_private *priv); +void msgdma_enable_rxirq(struct altera_dma_private *priv); +void msgdma_disable_rxirq(struct altera_dma_private *priv); +void msgdma_disable_txirq(struct altera_dma_private *priv); +void msgdma_clear_rxirq(struct altera_dma_private *priv); +void msgdma_clear_txirq(struct altera_dma_private *priv); +u32 msgdma_tx_completions(struct altera_dma_private *priv); +void msgdma_add_rx_desc(struct altera_dma_private *priv, struct altera_dma_buffer *buffer); +int msgdma_tx_buffer(struct altera_dma_private *priv, struct altera_dma_buffer *buffer); +u32 msgdma_rx_status(struct altera_dma_private *priv); +int msgdma_initialize(struct altera_dma_private *priv); +void msgdma_uninitialize(struct altera_dma_private *priv); +void msgdma_start_rxdma(struct altera_dma_private *priv); #endif /* __ALTERA_MSGDMA_H__ */ diff --git a/drivers/net/ethernet/altera/altera_sgdma.c b/drivers/net/ethernet/altera/altera_sgdma.c index f6c9904c88d0..14f7b0115eda 100644 --- a/drivers/net/ethernet/altera/altera_sgdma.c +++ b/drivers/net/ethernet/altera/altera_sgdma.c @@ -4,10 +4,11 @@ */ #include +#include +#include "altera_eth_dma.h" #include "altera_sgdma.h" #include "altera_sgdmahw.h" -#include "altera_tse.h" #include "altera_utils.h" static void sgdma_setup_descrip(struct sgdma_descrip __iomem *desc, @@ -20,39 +21,39 @@ static void sgdma_setup_descrip(struct sgdma_descrip __iomem *desc, int rfixed, int wfixed); -static int sgdma_async_write(struct altera_tse_private *priv, +static int sgdma_async_write(struct altera_dma_private *priv, struct sgdma_descrip __iomem *desc); -static int sgdma_async_read(struct altera_tse_private *priv); +static int sgdma_async_read(struct altera_dma_private *priv); static dma_addr_t -sgdma_txphysaddr(struct altera_tse_private *priv, +sgdma_txphysaddr(struct altera_dma_private *priv, struct sgdma_descrip __iomem *desc); static dma_addr_t -sgdma_rxphysaddr(struct altera_tse_private *priv, +sgdma_rxphysaddr(struct altera_dma_private *priv, struct sgdma_descrip __iomem *desc); -static int sgdma_txbusy(struct altera_tse_private *priv); +static int sgdma_txbusy(struct altera_dma_private *priv); -static int sgdma_rxbusy(struct altera_tse_private *priv); +static int sgdma_rxbusy(struct altera_dma_private *priv); static void -queue_tx(struct altera_tse_private *priv, struct tse_buffer *buffer); +queue_tx(struct altera_dma_private *priv, struct altera_dma_buffer *buffer); static void -queue_rx(struct altera_tse_private *priv, struct tse_buffer *buffer); +queue_rx(struct altera_dma_private *priv, struct altera_dma_buffer *buffer); -static struct tse_buffer * -dequeue_tx(struct altera_tse_private *priv); +static struct altera_dma_buffer * +dequeue_tx(struct altera_dma_private *priv); -static struct tse_buffer * -dequeue_rx(struct altera_tse_private *priv); +static struct altera_dma_buffer * +dequeue_rx(struct altera_dma_private *priv); -static struct tse_buffer * -queue_rx_peekhead(struct altera_tse_private *priv); +static struct altera_dma_buffer * +queue_rx_peekhead(struct altera_dma_private *priv); -int sgdma_initialize(struct 
altera_tse_private *priv)
+int sgdma_initialize(struct altera_dma_private *priv)
 {
	priv->txctrlreg = SGDMA_CTRLREG_ILASTD |
		      SGDMA_CTRLREG_INTEN;
@@ -97,7 +98,7 @@ int sgdma_initialize(struct altera_tse_private *priv)
 	return 0;
 }
 
-void sgdma_uninitialize(struct altera_tse_private *priv)
+void sgdma_uninitialize(struct altera_dma_private *priv)
 {
 	if (priv->rxdescphys)
 		dma_unmap_single(priv->device, priv->rxdescphys,
@@ -111,7 +112,7 @@ void sgdma_uninitialize(struct altera_tse_private *priv)
 /* This function resets the SGDMA controller and clears the
  * descriptor memory used for transmits and receives.
  */
-void sgdma_reset(struct altera_tse_private *priv)
+void sgdma_reset(struct altera_dma_private *priv)
 {
 	/* Initialize descriptor memory to 0 */
 	memset_io(priv->tx_dma_desc, 0, priv->txdescmem);
@@ -129,29 +130,29 @@
  * and disable
  */
-void sgdma_enable_rxirq(struct altera_tse_private *priv)
+void sgdma_enable_rxirq(struct altera_dma_private *priv)
 {
 }
 
-void sgdma_enable_txirq(struct altera_tse_private *priv)
+void sgdma_enable_txirq(struct altera_dma_private *priv)
 {
 }
 
-void sgdma_disable_rxirq(struct altera_tse_private *priv)
+void sgdma_disable_rxirq(struct altera_dma_private *priv)
 {
 }
 
-void sgdma_disable_txirq(struct altera_tse_private *priv)
+void sgdma_disable_txirq(struct altera_dma_private *priv)
 {
 }
 
-void sgdma_clear_rxirq(struct altera_tse_private *priv)
+void sgdma_clear_rxirq(struct altera_dma_private *priv)
 {
 	tse_set_bit(priv->rx_dma_csr, sgdma_csroffs(control),
 		    SGDMA_CTRLREG_CLRINT);
 }
 
-void sgdma_clear_txirq(struct altera_tse_private *priv)
+void sgdma_clear_txirq(struct altera_dma_private *priv)
 {
 	tse_set_bit(priv->tx_dma_csr, sgdma_csroffs(control),
 		    SGDMA_CTRLREG_CLRINT);
@@ -162,7 +163,7 @@ void sgdma_clear_txirq(struct altera_tse_private *priv)
  *
  * tx_lock is held by the caller
  */
-int sgdma_tx_buffer(struct altera_tse_private *priv, struct tse_buffer *buffer)
+int sgdma_tx_buffer(struct altera_dma_private *priv, struct altera_dma_buffer *buffer)
 {
 	struct sgdma_descrip __iomem *descbase =
 		(struct sgdma_descrip __iomem *)priv->tx_dma_desc;
@@ -193,7 +194,7 @@ int sgdma_tx_buffer(struct altera_tse_private *priv, struct tse_buffer *buffer)
 }
 
 /* tx_lock held to protect access to queued tx list
 */
-u32 sgdma_tx_completions(struct altera_tse_private *priv)
+u32 sgdma_tx_completions(struct altera_dma_private *priv)
 {
 	u32 ready = 0;
 
@@ -207,13 +208,13 @@ u32 sgdma_tx_completions(struct altera_tse_private *priv)
 	return ready;
 }
 
-void sgdma_start_rxdma(struct altera_tse_private *priv)
+void sgdma_start_rxdma(struct altera_dma_private *priv)
 {
 	sgdma_async_read(priv);
 }
 
-void sgdma_add_rx_desc(struct altera_tse_private *priv,
-		       struct tse_buffer *rxbuffer)
+void sgdma_add_rx_desc(struct altera_dma_private *priv,
+		       struct altera_dma_buffer *rxbuffer)
 {
 	queue_rx(priv, rxbuffer);
 }
@@ -221,12 +222,12 @@ void sgdma_add_rx_desc(struct altera_tse_private *priv,
 /* status is returned on upper 16 bits,
 * length is returned in lower 16 bits
 */
-u32 sgdma_rx_status(struct altera_tse_private *priv)
+u32 sgdma_rx_status(struct altera_dma_private *priv)
 {
 	struct sgdma_descrip __iomem *base =
 		(struct sgdma_descrip __iomem *)priv->rx_dma_desc;
 	struct sgdma_descrip __iomem *desc = NULL;
-	struct tse_buffer *rxbuffer = NULL;
+	struct altera_dma_buffer *rxbuffer = NULL;
 	unsigned int rxstatus = 0;
 
 	u32 sts = csrrd32(priv->rx_dma_csr, sgdma_csroffs(status));
@@ -328,14 +329,14 @@ static void sgdma_setup_descrip(struct sgdma_descrip __iomem *desc,
 * If read status indicate not busy and a status, restart the async
 * DMA read.
 */
-static int sgdma_async_read(struct altera_tse_private *priv)
+static int sgdma_async_read(struct altera_dma_private *priv)
 {
 	struct sgdma_descrip __iomem *descbase =
 		(struct sgdma_descrip __iomem *)priv->rx_dma_desc;
 
 	struct sgdma_descrip __iomem *cdesc = &descbase[0];
 	struct sgdma_descrip __iomem *ndesc = &descbase[1];
-	struct tse_buffer *rxbuffer = NULL;
+	struct altera_dma_buffer *rxbuffer = NULL;
 
 	if (!sgdma_rxbusy(priv)) {
 		rxbuffer = queue_rx_peekhead(priv);
@@ -373,7 +374,7 @@ static int sgdma_async_read(struct altera_tse_private *priv)
 	return 0;
 }
 
-static int sgdma_async_write(struct altera_tse_private *priv,
+static int sgdma_async_write(struct altera_dma_private *priv,
 			     struct sgdma_descrip __iomem *desc)
 {
 	if (sgdma_txbusy(priv))
@@ -398,7 +399,7 @@ static int sgdma_async_write(struct altera_tse_private *priv,
 }
 
 static dma_addr_t
-sgdma_txphysaddr(struct altera_tse_private *priv,
+sgdma_txphysaddr(struct altera_dma_private *priv,
 		 struct sgdma_descrip __iomem *desc)
 {
 	dma_addr_t paddr = priv->txdescmem_busaddr;
@@ -408,7 +409,7 @@ sgdma_txphysaddr(struct altera_tse_private *priv,
 }
 
 static dma_addr_t
-sgdma_rxphysaddr(struct altera_tse_private *priv,
+sgdma_rxphysaddr(struct altera_dma_private *priv,
 		 struct sgdma_descrip __iomem *desc)
 {
 	dma_addr_t paddr = priv->rxdescmem_busaddr;
@@ -439,7 +440,7 @@ sgdma_rxphysaddr(struct altera_tse_private *priv,
 * primitive to avoid simultaneous pushes/pops to the list.
 */
 static void
-queue_tx(struct altera_tse_private *priv, struct tse_buffer *buffer)
+queue_tx(struct altera_dma_private *priv, struct altera_dma_buffer *buffer)
 {
 	list_add_tail(&buffer->lh, &priv->txlisthd);
 }
@@ -449,7 +450,7 @@ queue_tx(struct altera_tse_private *priv, struct tse_buffer *buffer)
 * primitive to avoid simultaneous pushes/pops to the list.
 */
 static void
-queue_rx(struct altera_tse_private *priv, struct tse_buffer *buffer)
+queue_rx(struct altera_dma_private *priv, struct altera_dma_buffer *buffer)
 {
 	list_add_tail(&buffer->lh, &priv->rxlisthd);
 }
@@ -459,12 +460,12 @@ queue_rx(struct altera_tse_private *priv, struct tse_buffer *buffer)
 * assumes the caller is managing and holding a mutual exclusion
 * primitive to avoid simultaneous pushes/pops to the list.
 */
-static struct tse_buffer *
-dequeue_tx(struct altera_tse_private *priv)
+static struct altera_dma_buffer *
+dequeue_tx(struct altera_dma_private *priv)
 {
-	struct tse_buffer *buffer = NULL;
+	struct altera_dma_buffer *buffer = NULL;
 
-	list_remove_head(&priv->txlisthd, buffer, struct tse_buffer, lh);
+	list_remove_head(&priv->txlisthd, buffer, struct altera_dma_buffer, lh);
 	return buffer;
 }
@@ -473,12 +474,12 @@ dequeue_tx(struct altera_tse_private *priv)
 * assumes the caller is managing and holding a mutual exclusion
 * primitive to avoid simultaneous pushes/pops to the list.
 */
-static struct tse_buffer *
-dequeue_rx(struct altera_tse_private *priv)
+static struct altera_dma_buffer *
+dequeue_rx(struct altera_dma_private *priv)
 {
-	struct tse_buffer *buffer = NULL;
+	struct altera_dma_buffer *buffer = NULL;
 
-	list_remove_head(&priv->rxlisthd, buffer, struct tse_buffer, lh);
+	list_remove_head(&priv->rxlisthd, buffer, struct altera_dma_buffer, lh);
 	return buffer;
 }
@@ -488,18 +489,18 @@ dequeue_rx(struct altera_tse_private *priv)
 * primitive to avoid simultaneous pushes/pops to the list while the
 * head is being examined.
 */
-static struct tse_buffer *
-queue_rx_peekhead(struct altera_tse_private *priv)
+static struct altera_dma_buffer *
+queue_rx_peekhead(struct altera_dma_private *priv)
 {
-	struct tse_buffer *buffer = NULL;
+	struct altera_dma_buffer *buffer = NULL;
 
-	list_peek_head(&priv->rxlisthd, buffer, struct tse_buffer, lh);
+	list_peek_head(&priv->rxlisthd, buffer, struct altera_dma_buffer, lh);
 	return buffer;
 }
 
 /* check and return rx sgdma status without polling */
-static int sgdma_rxbusy(struct altera_tse_private *priv)
+static int sgdma_rxbusy(struct altera_dma_private *priv)
 {
 	return csrrd32(priv->rx_dma_csr, sgdma_csroffs(status))
 		       & SGDMA_STSREG_BUSY;
@@ -508,7 +509,7 @@ static int sgdma_rxbusy(struct altera_tse_private *priv)
 /* waits for the tx sgdma to finish it's current operation, returns 0
 * when it transitions to nonbusy, returns 1 if the operation times out
 */
-static int sgdma_txbusy(struct altera_tse_private *priv)
+static int sgdma_txbusy(struct altera_dma_private *priv)
 {
 	int delay = 0;
 
diff --git a/drivers/net/ethernet/altera/altera_sgdma.h b/drivers/net/ethernet/altera/altera_sgdma.h
index 08afe1c9994f..998deb74c5f1 100644
--- a/drivers/net/ethernet/altera/altera_sgdma.h
+++ b/drivers/net/ethernet/altera/altera_sgdma.h
@@ -6,20 +6,20 @@
 #ifndef __ALTERA_SGDMA_H__
 #define __ALTERA_SGDMA_H__
 
-void sgdma_reset(struct altera_tse_private *);
-void sgdma_enable_txirq(struct altera_tse_private *);
-void sgdma_enable_rxirq(struct altera_tse_private *);
-void sgdma_disable_rxirq(struct altera_tse_private *);
-void sgdma_disable_txirq(struct altera_tse_private *);
-void sgdma_clear_rxirq(struct altera_tse_private *);
-void sgdma_clear_txirq(struct altera_tse_private *);
-int sgdma_tx_buffer(struct altera_tse_private *priv, struct tse_buffer *);
-u32 sgdma_tx_completions(struct altera_tse_private *);
-void sgdma_add_rx_desc(struct altera_tse_private *priv, struct tse_buffer *);
-void sgdma_status(struct altera_tse_private *);
-u32 sgdma_rx_status(struct altera_tse_private *);
-int sgdma_initialize(struct altera_tse_private *);
-void sgdma_uninitialize(struct altera_tse_private *);
-void sgdma_start_rxdma(struct altera_tse_private *);
+void sgdma_reset(struct altera_dma_private *priv);
+void sgdma_enable_txirq(struct altera_dma_private *priv);
+void sgdma_enable_rxirq(struct altera_dma_private *priv);
+void sgdma_disable_rxirq(struct altera_dma_private *priv);
+void sgdma_disable_txirq(struct altera_dma_private *priv);
+void sgdma_clear_rxirq(struct altera_dma_private *priv);
+void sgdma_clear_txirq(struct altera_dma_private *priv);
+int sgdma_tx_buffer(struct altera_dma_private *priv, struct altera_dma_buffer *buffer);
+u32 sgdma_tx_completions(struct altera_dma_private *priv);
+void sgdma_add_rx_desc(struct altera_dma_private *priv, struct altera_dma_buffer *buffer);
+void sgdma_status(struct altera_dma_private *priv);
+u32 sgdma_rx_status(struct altera_dma_private *priv);
+int sgdma_initialize(struct altera_dma_private *priv);
+void sgdma_uninitialize(struct altera_dma_private *priv);
+void sgdma_start_rxdma(struct altera_dma_private *priv);
 
 #endif /* __ALTERA_SGDMA_H__ */
diff --git a/drivers/net/ethernet/altera/altera_tse.h b/drivers/net/ethernet/altera/altera_tse.h
index 4874139e7cdf..471ffcc98ff2 100644
--- a/drivers/net/ethernet/altera/altera_tse.h
+++ b/drivers/net/ethernet/altera/altera_tse.h
@@ -368,29 +368,6 @@ struct tse_buffer {
 
 struct altera_tse_private;
 
-#define ALTERA_DTYPE_SGDMA 1
-#define ALTERA_DTYPE_MSGDMA 2
-
-/* standard DMA interface for SGDMA and MSGDMA */
-struct altera_dmaops {
-	int altera_dtype;
-	int dmamask;
-	void (*reset_dma)(struct altera_tse_private *);
-	void (*enable_txirq)(struct altera_tse_private *);
-	void (*enable_rxirq)(struct altera_tse_private *);
-	void (*disable_txirq)(struct altera_tse_private *);
-	void (*disable_rxirq)(struct altera_tse_private *);
-	void (*clear_txirq)(struct altera_tse_private *);
-	void (*clear_rxirq)(struct altera_tse_private *);
-	int (*tx_buffer)(struct altera_tse_private *, struct tse_buffer *);
-	u32 (*tx_completions)(struct altera_tse_private *);
-	void (*add_rx_desc)(struct altera_tse_private *, struct tse_buffer *);
-	u32 (*get_rx_status)(struct altera_tse_private *);
-	int (*init_dma)(struct altera_tse_private *);
-	void (*uninit_dma)(struct altera_tse_private *);
-	void (*start_rxdma)(struct altera_tse_private *);
-};
-
 /* This structure is private to each device.
 */
 struct altera_tse_private {
@@ -400,6 +377,9 @@ struct altera_tse_private {
 	/* MAC address space */
 	struct altera_tse_mac __iomem *mac_dev;
+
+	/* Shared DMA structure */
+	struct altera_dma_private dma_priv;
 
 	/* TSE Revision */
 	u32 revision;
diff --git a/drivers/net/ethernet/altera/altera_tse_ethtool.c b/drivers/net/ethernet/altera/altera_tse_ethtool.c
index d34373bac94a..6253bfe86e47 100644
--- a/drivers/net/ethernet/altera/altera_tse_ethtool.c
+++ b/drivers/net/ethernet/altera/altera_tse_ethtool.c
@@ -21,6 +21,7 @@
 #include
 #include
 
+#include "altera_eth_dma.h"
 #include "altera_tse.h"
 #include "altera_utils.h"
diff --git a/drivers/net/ethernet/altera/altera_tse_main.c b/drivers/net/ethernet/altera/altera_tse_main.c
index f98810eac44f..1b66970a40e6 100644
--- a/drivers/net/ethernet/altera/altera_tse_main.c
+++ b/drivers/net/ethernet/altera/altera_tse_main.c
@@ -29,18 +29,19 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #include
+#include
 #include
 #include
 #include
-#include
 #include
 #include
 #include
+#include "altera_eth_dma.h"
 #include "altera_msgdma.h"
 #include "altera_sgdma.h"
 #include "altera_tse.h"
@@ -79,13 +80,15 @@ MODULE_PARM_DESC(dma_tx_num, "Number of descriptors in the TX list");
 /* Allow network stack to resume queuing packets after we've
 * finished transmitting at least 1/4 of the packets in the queue.
 */
-#define TSE_TX_THRESH(x)	(x->tx_ring_size / 4)
+#define TSE_TX_THRESH(x)	(x->dma_priv.tx_ring_size / 4)
 
 #define TXQUEUESTOP_THRESHHOLD	2
 
 static inline u32 tse_tx_avail(struct altera_tse_private *priv)
 {
-	return priv->tx_cons + priv->tx_ring_size - priv->tx_prod - 1;
+	struct altera_dma_private *dma = &priv->dma_priv;
+
+	return dma->tx_cons + dma->tx_ring_size - dma->tx_prod - 1;
 }
 
 /* MDIO specific functions
@@ -194,7 +197,7 @@ static void altera_tse_mdio_destroy(struct net_device *dev)
 }
 
 static int tse_init_rx_buffer(struct altera_tse_private *priv,
-			      struct tse_buffer *rxbuffer, int len)
+			      struct altera_dma_buffer *rxbuffer, int len)
 {
 	rxbuffer->skb = netdev_alloc_skb_ip_align(priv->dev, len);
 	if (!rxbuffer->skb)
@@ -215,7 +218,7 @@ static int tse_init_rx_buffer(struct altera_tse_private *priv,
 }
 
 static void tse_free_rx_buffer(struct altera_tse_private *priv,
-			       struct tse_buffer *rxbuffer)
+			       struct altera_dma_buffer *rxbuffer)
 {
 	dma_addr_t dma_addr = rxbuffer->dma_addr;
 	struct sk_buff *skb = rxbuffer->skb;
@@ -234,7 +237,7 @@ static void tse_free_rx_buffer(struct altera_tse_private *priv,
 
 /* Unmap and free Tx buffer resources
 */
 static void tse_free_tx_buffer(struct altera_tse_private *priv,
-			       struct tse_buffer *buffer)
+			       struct altera_dma_buffer *buffer)
 {
 	if (buffer->dma_addr) {
 		if (buffer->mapped_as_page)
@@ -253,42 +256,42 @@ static void tse_free_tx_buffer(struct altera_tse_private *priv,
 
 static int alloc_init_skbufs(struct altera_tse_private *priv)
 {
-	unsigned int rx_descs = priv->rx_ring_size;
-	unsigned int tx_descs = priv->tx_ring_size;
+	struct altera_dma_private *dma = &priv->dma_priv;
+	unsigned int rx_descs = dma->rx_ring_size;
+	unsigned int tx_descs = dma->tx_ring_size;
 	int ret = -ENOMEM;
 	int i;
 
 	/* Create Rx ring buffer */
-	priv->rx_ring = kcalloc(rx_descs, sizeof(struct tse_buffer), GFP_KERNEL);
-	if (!priv->rx_ring)
+	dma->rx_ring = kcalloc(rx_descs, sizeof(struct altera_dma_buffer), GFP_KERNEL);
+	if (!dma->rx_ring)
 		goto err_rx_ring;
 
 	/* Create Tx ring buffer */
-	priv->tx_ring = kcalloc(tx_descs, sizeof(struct tse_buffer), GFP_KERNEL);
-	if (!priv->tx_ring)
+	dma->tx_ring = kcalloc(tx_descs, sizeof(struct altera_dma_buffer), GFP_KERNEL);
+	if (!dma->tx_ring)
 		goto err_tx_ring;
 
-	priv->tx_cons = 0;
-	priv->tx_prod = 0;
+	dma->tx_cons = 0;
+	dma->tx_prod = 0;
 
 	/* Init Rx ring */
 	for (i = 0; i < rx_descs; i++) {
-		ret = tse_init_rx_buffer(priv, &priv->rx_ring[i],
-					 priv->rx_dma_buf_sz);
+		ret = tse_init_rx_buffer(priv, &priv->dma_priv.rx_ring[i], dma->rx_dma_buf_sz);
 		if (ret)
 			goto err_init_rx_buffers;
 	}
 
-	priv->rx_cons = 0;
-	priv->rx_prod = 0;
+	dma->rx_cons = 0;
+	dma->rx_prod = 0;
 
 	return 0;
 err_init_rx_buffers:
 	while (--i >= 0)
-		tse_free_rx_buffer(priv, &priv->rx_ring[i]);
-	kfree(priv->tx_ring);
+		tse_free_rx_buffer(priv, &priv->dma_priv.rx_ring[i]);
+	kfree(dma->tx_ring);
 err_tx_ring:
-	kfree(priv->rx_ring);
+	kfree(dma->rx_ring);
 err_rx_ring:
 	return ret;
 }
@@ -296,36 +299,38 @@ static int alloc_init_skbufs(struct altera_tse_private *priv)
 static void free_skbufs(struct net_device *dev)
 {
 	struct altera_tse_private *priv = netdev_priv(dev);
-	unsigned int rx_descs = priv->rx_ring_size;
-	unsigned int tx_descs = priv->tx_ring_size;
+	struct altera_dma_private *dma = &priv->dma_priv;
+	unsigned int rx_descs = dma->rx_ring_size;
+	unsigned int tx_descs = dma->tx_ring_size;
 	int i;
 
 	/* Release the DMA TX/RX socket buffers */
 	for (i = 0; i < rx_descs; i++)
-		tse_free_rx_buffer(priv, &priv->rx_ring[i]);
+		tse_free_rx_buffer(priv, &priv->dma_priv.rx_ring[i]);
 	for (i = 0; i < tx_descs; i++)
-		tse_free_tx_buffer(priv, &priv->tx_ring[i]);
+		tse_free_tx_buffer(priv, &priv->dma_priv.tx_ring[i]);
 
-	kfree(priv->tx_ring);
+	kfree(dma->tx_ring);
 }
 
 /* Reallocate the skb for the reception process
 */
 static inline void tse_rx_refill(struct altera_tse_private *priv)
 {
-	unsigned int rxsize = priv->rx_ring_size;
+	struct altera_dma_private *dma = &priv->dma_priv;
+	unsigned int rxsize = dma->rx_ring_size;
 	unsigned int entry;
 	int ret;
 
-	for (; priv->rx_cons - priv->rx_prod > 0; priv->rx_prod++) {
-		entry = priv->rx_prod % rxsize;
-		if (likely(priv->rx_ring[entry].skb == NULL)) {
-			ret = tse_init_rx_buffer(priv, &priv->rx_ring[entry],
-						 priv->rx_dma_buf_sz);
+	for (; dma->rx_cons - dma->rx_prod > 0; dma->rx_prod++) {
+		entry = dma->rx_prod % rxsize;
+		if (likely(dma->rx_ring[entry].skb == NULL)) {
+			ret = tse_init_rx_buffer(priv, &priv->dma_priv.rx_ring[entry],
+						 dma->rx_dma_buf_sz);
 			if (unlikely(ret != 0))
 				break;
-			priv->dmaops->add_rx_desc(priv, &priv->rx_ring[entry]);
+			priv->dmaops->add_rx_desc(&priv->dma_priv, &priv->dma_priv.rx_ring[entry]);
 		}
 	}
 }
@@ -350,7 +355,8 @@ static inline void tse_rx_vlan(struct net_device *dev, struct sk_buff *skb)
 */
 static int tse_rx(struct altera_tse_private *priv, int limit)
 {
-	unsigned int entry = priv->rx_cons % priv->rx_ring_size;
+	struct altera_dma_private *dma = &priv->dma_priv;
+	unsigned int entry = dma->rx_cons % dma->rx_ring_size;
 	unsigned int next_entry;
 	unsigned int count = 0;
 	struct sk_buff *skb;
@@ -364,7 +370,7 @@ static int tse_rx(struct altera_tse_private *priv, int limit)
 	 * (reading the last byte of the response pops the value from the fifo.)
 	 */
 	while ((count < limit) &&
-	       ((rxstatus = priv->dmaops->get_rx_status(priv)) != 0)) {
+	       ((rxstatus = priv->dmaops->get_rx_status(&priv->dma_priv)) != 0)) {
 		pktstatus = rxstatus >> 16;
 		pktlength = rxstatus & 0xffff;
 
@@ -380,9 +386,9 @@ static int tse_rx(struct altera_tse_private *priv, int limit)
 		pktlength -= 2;
 		count++;
-		next_entry = (++priv->rx_cons) % priv->rx_ring_size;
+		next_entry = (++dma->rx_cons) % dma->rx_ring_size;
 
-		skb = priv->rx_ring[entry].skb;
+		skb = dma->rx_ring[entry].skb;
 		if (unlikely(!skb)) {
 			netdev_err(priv->dev,
 				   "%s: Inconsistent Rx descriptor chain\n",
@@ -390,12 +396,12 @@ static int tse_rx(struct altera_tse_private *priv, int limit)
 			priv->dev->stats.rx_dropped++;
 			break;
 		}
-		priv->rx_ring[entry].skb = NULL;
+		dma->rx_ring[entry].skb = NULL;
 
 		skb_put(skb, pktlength);
 
-		dma_unmap_single(priv->device, priv->rx_ring[entry].dma_addr,
-				 priv->rx_ring[entry].len, DMA_FROM_DEVICE);
+		dma_unmap_single(priv->device, dma->rx_ring[entry].dma_addr,
+				 dma->rx_ring[entry].len, DMA_FROM_DEVICE);
 
 		if (netif_msg_pktdata(priv)) {
 			netdev_info(priv->dev, "frame received %d bytes\n",
@@ -426,30 +432,31 @@ static int tse_rx(struct altera_tse_private *priv, int limit)
 */
 static int tse_tx_complete(struct altera_tse_private *priv)
 {
-	unsigned int txsize = priv->tx_ring_size;
-	struct tse_buffer *tx_buff;
+	struct altera_dma_private *dma = &priv->dma_priv;
+	unsigned int txsize = dma->tx_ring_size;
+	struct altera_dma_buffer *tx_buff;
 	unsigned int entry;
 	int txcomplete = 0;
 	u32 ready;
 
 	spin_lock(&priv->tx_lock);
 
-	ready = priv->dmaops->tx_completions(priv);
+	ready = priv->dmaops->tx_completions(&priv->dma_priv);
 
 	/* Free sent buffers */
-	while (ready && (priv->tx_cons != priv->tx_prod)) {
-		entry = priv->tx_cons % txsize;
-		tx_buff = &priv->tx_ring[entry];
+	while (ready && (dma->tx_cons != dma->tx_prod)) {
+		entry = dma->tx_cons % txsize;
+		tx_buff = &priv->dma_priv.tx_ring[entry];
 
 		if (netif_msg_tx_done(priv))
-			netdev_dbg(priv->dev, "%s: curr %d, dirty %d\n",
-				   __func__, priv->tx_prod, priv->tx_cons);
+			netdev_dbg(priv->dev, "%s: curr %d, dirty %d\n", __func__, dma->tx_prod,
+				   dma->tx_cons);
 
 		if (likely(tx_buff->skb))
 			priv->dev->stats.tx_packets++;
 
 		tse_free_tx_buffer(priv, tx_buff);
-		priv->tx_cons++;
+		dma->tx_cons++;
 
 		txcomplete++;
 		ready--;
@@ -492,8 +499,8 @@ static int tse_poll(struct napi_struct *napi, int budget)
 			rxcomplete, budget);
 
 		spin_lock_irqsave(&priv->rxdma_irq_lock, flags);
-		priv->dmaops->enable_rxirq(priv);
-		priv->dmaops->enable_txirq(priv);
+		priv->dmaops->enable_rxirq(&priv->dma_priv);
+		priv->dmaops->enable_txirq(&priv->dma_priv);
 		spin_unlock_irqrestore(&priv->rxdma_irq_lock, flags);
 	}
 	return rxcomplete;
@@ -514,14 +521,14 @@ static irqreturn_t altera_isr(int irq, void *dev_id)
 	spin_lock(&priv->rxdma_irq_lock);
 	/* reset IRQs */
-	priv->dmaops->clear_rxirq(priv);
-	priv->dmaops->clear_txirq(priv);
+	priv->dmaops->clear_rxirq(&priv->dma_priv);
+	priv->dmaops->clear_txirq(&priv->dma_priv);
 	spin_unlock(&priv->rxdma_irq_lock);
 
 	if (likely(napi_schedule_prep(&priv->napi))) {
 		spin_lock(&priv->rxdma_irq_lock);
-		priv->dmaops->disable_rxirq(priv);
-		priv->dmaops->disable_txirq(priv);
+		priv->dmaops->disable_rxirq(&priv->dma_priv);
+		priv->dmaops->disable_txirq(&priv->dma_priv);
 		spin_unlock(&priv->rxdma_irq_lock);
 		__napi_schedule(&priv->napi);
 	}
@@ -540,10 +547,11 @@ static irqreturn_t altera_isr(int irq, void *dev_id)
 static netdev_tx_t tse_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct altera_tse_private *priv = netdev_priv(dev);
+	struct altera_dma_private *dma = &priv->dma_priv;
 	unsigned int nopaged_len = skb_headlen(skb);
-	unsigned int txsize = priv->tx_ring_size;
+	unsigned int txsize = dma->tx_ring_size;
 	int nfrags = skb_shinfo(skb)->nr_frags;
-	struct tse_buffer *buffer = NULL;
+	struct altera_dma_buffer *buffer = NULL;
 	netdev_tx_t ret = NETDEV_TX_OK;
 	dma_addr_t dma_addr;
 	unsigned int entry;
@@ -563,8 +571,8 @@ static netdev_tx_t tse_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	}
 
 	/* Map the first skb fragment */
-	entry = priv->tx_prod % txsize;
-	buffer = &priv->tx_ring[entry];
+	entry = dma->tx_prod % txsize;
+	buffer = &priv->dma_priv.tx_ring[entry];
 
 	dma_addr = dma_map_single(priv->device, skb->data, nopaged_len,
 				  DMA_TO_DEVICE);
@@ -578,11 +586,11 @@ static netdev_tx_t tse_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	buffer->dma_addr = dma_addr;
 	buffer->len = nopaged_len;
 
-	priv->dmaops->tx_buffer(priv, buffer);
+	priv->dmaops->tx_buffer(&priv->dma_priv, buffer);
 
 	skb_tx_timestamp(skb);
 
-	priv->tx_prod++;
+	dma->tx_prod++;
 	dev->stats.tx_bytes += skb->len;
 
 	if (unlikely(tse_tx_avail(priv) <= TXQUEUESTOP_THRESHHOLD)) {
@@ -875,12 +883,13 @@ static void tse_set_rx_mode(struct net_device *dev)
 static int tse_open(struct net_device *dev)
 {
 	struct altera_tse_private *priv = netdev_priv(dev);
+	struct altera_dma_private *dma = &priv->dma_priv;
 	unsigned long flags;
 	int ret = 0;
 	int i;
 
 	/* Reset and configure TSE MAC and probe associated PHY */
-	ret = priv->dmaops->init_dma(priv);
+	ret = priv->dmaops->init_dma(&priv->dma_priv);
 	if (ret != 0) {
 		netdev_err(dev, "Cannot initialize DMA\n");
 		goto phy_error;
@@ -910,11 +919,11 @@ static int tse_open(struct net_device *dev)
 		goto alloc_skbuf_error;
 	}
 
-	priv->dmaops->reset_dma(priv);
+	priv->dmaops->reset_dma(&priv->dma_priv);
 
 	/* Create and initialize the TX/RX descriptors chains.
 	 */
-	priv->rx_ring_size = dma_rx_num;
-	priv->tx_ring_size = dma_tx_num;
+	dma->rx_ring_size = dma_rx_num;
+	dma->tx_ring_size = dma_tx_num;
 	ret = alloc_init_skbufs(priv);
 	if (ret) {
 		netdev_err(dev, "DMA descriptors initialization failed\n");
@@ -942,12 +951,12 @@ static int tse_open(struct net_device *dev)
 
 	/* Enable DMA interrupts */
 	spin_lock_irqsave(&priv->rxdma_irq_lock, flags);
-	priv->dmaops->enable_rxirq(priv);
-	priv->dmaops->enable_txirq(priv);
+	priv->dmaops->enable_rxirq(&priv->dma_priv);
+	priv->dmaops->enable_txirq(&priv->dma_priv);
 
 	/* Setup RX descriptor chain */
-	for (i = 0; i < priv->rx_ring_size; i++)
-		priv->dmaops->add_rx_desc(priv, &priv->rx_ring[i]);
+	for (i = 0; i < priv->dma_priv.rx_ring_size; i++)
+		priv->dmaops->add_rx_desc(&priv->dma_priv, &priv->dma_priv.rx_ring[i]);
 
 	spin_unlock_irqrestore(&priv->rxdma_irq_lock, flags);
 
@@ -961,7 +970,7 @@ static int tse_open(struct net_device *dev)
 	napi_enable(&priv->napi);
 	netif_start_queue(dev);
 
-	priv->dmaops->start_rxdma(priv);
+	priv->dmaops->start_rxdma(&priv->dma_priv);
 
 	/* Start MAC Rx/Tx */
 	spin_lock(&priv->mac_cfg_lock);
@@ -994,8 +1003,8 @@ static int tse_shutdown(struct net_device *dev)
 
 	/* Disable DMA interrupts */
 	spin_lock_irqsave(&priv->rxdma_irq_lock, flags);
-	priv->dmaops->disable_rxirq(priv);
-	priv->dmaops->disable_txirq(priv);
+	priv->dmaops->disable_rxirq(&priv->dma_priv);
+	priv->dmaops->disable_txirq(&priv->dma_priv);
 	spin_unlock_irqrestore(&priv->rxdma_irq_lock, flags);
 
 	/* Free the IRQ lines */
@@ -1013,13 +1022,13 @@ static int tse_shutdown(struct net_device *dev)
 	 */
 	if (ret)
 		netdev_dbg(dev, "Cannot reset MAC core (error: %d)\n", ret);
-	priv->dmaops->reset_dma(priv);
+	priv->dmaops->reset_dma(&priv->dma_priv);
 	free_skbufs(dev);
 
 	spin_unlock(&priv->tx_lock);
 	spin_unlock(&priv->mac_cfg_lock);
 
-	priv->dmaops->uninit_dma(priv);
+	priv->dmaops->uninit_dma(&priv->dma_priv);
 
 	return 0;
 }
@@ -1134,11 +1143,9 @@ static int altera_tse_probe(struct platform_device *pdev)
 	struct mdio_regmap_config mrc;
 	struct resource *control_port;
 	struct regmap *pcs_regmap;
-	struct resource *dma_res;
 	struct resource *pcs_res;
 	struct mii_bus *pcs_bus;
 	struct net_device *ndev;
-	void __iomem *descmap;
 	int ret = -ENODEV;
 
 	ndev = alloc_etherdev(sizeof(struct altera_tse_private));
@@ -1151,69 +1158,18 @@ static int altera_tse_probe(struct platform_device *pdev)
 	priv = netdev_priv(ndev);
 	priv->device = &pdev->dev;
+	priv->dma_priv.device = &pdev->dev;
 	priv->dev = ndev;
+	priv->dma_priv.dev = ndev;
 	priv->msg_enable = netif_msg_init(debug, default_msg_level);
+	priv->dma_priv.msg_enable = netif_msg_init(debug, default_msg_level);
 
 	priv->dmaops = device_get_match_data(&pdev->dev);
 
-	if (priv->dmaops &&
-	    priv->dmaops->altera_dtype == ALTERA_DTYPE_SGDMA) {
-		/* Get the mapped address to the SGDMA descriptor memory */
-		ret = request_and_map(pdev, "s1", &dma_res, &descmap);
-		if (ret)
-			goto err_free_netdev;
-
-		/* Start of that memory is for transmit descriptors */
-		priv->tx_dma_desc = descmap;
-
-		/* First half is for tx descriptors, other half for tx */
-		priv->txdescmem = resource_size(dma_res)/2;
-
-		priv->txdescmem_busaddr = (dma_addr_t)dma_res->start;
-
-		priv->rx_dma_desc = (void __iomem *)((uintptr_t)(descmap +
-								 priv->txdescmem));
-		priv->rxdescmem = resource_size(dma_res)/2;
-		priv->rxdescmem_busaddr = dma_res->start;
-		priv->rxdescmem_busaddr += priv->txdescmem;
-
-		if (upper_32_bits(priv->rxdescmem_busaddr)) {
-			dev_dbg(priv->device,
-				"SGDMA bus addresses greater than 32-bits\n");
-			ret = -EINVAL;
-			goto err_free_netdev;
-		}
-		if (upper_32_bits(priv->txdescmem_busaddr)) {
-			dev_dbg(priv->device,
-				"SGDMA bus addresses greater than 32-bits\n");
-			ret = -EINVAL;
-			goto err_free_netdev;
-		}
-	} else if (priv->dmaops &&
-		   priv->dmaops->altera_dtype == ALTERA_DTYPE_MSGDMA) {
-		ret = request_and_map(pdev, "rx_resp", &dma_res,
-				      &priv->rx_dma_resp);
-		if (ret)
-			goto err_free_netdev;
-
-		ret = request_and_map(pdev, "tx_desc", &dma_res,
-				      &priv->tx_dma_desc);
-		if (ret)
-			goto err_free_netdev;
-
-		priv->txdescmem = resource_size(dma_res);
-		priv->txdescmem_busaddr = dma_res->start;
-
-		ret = request_and_map(pdev, "rx_desc", &dma_res,
-				      &priv->rx_dma_desc);
-		if (ret)
-			goto err_free_netdev;
-
-		priv->rxdescmem = resource_size(dma_res);
-		priv->rxdescmem_busaddr = dma_res->start;
-
-	} else {
-		ret = -ENODEV;
+	/* Map DMA */
+	ret = altera_eth_dma_probe(pdev, &priv->dma_priv, priv->dmaops->altera_dtype);
+	if (ret) {
+		dev_err(&pdev->dev, "cannot map DMA\n");
 		goto err_free_netdev;
 	}
 
@@ -1233,18 +1189,6 @@ static int altera_tse_probe(struct platform_device *pdev)
 	if (ret)
 		goto err_free_netdev;
 
-	/* xSGDMA Rx Dispatcher address space */
-	ret = request_and_map(pdev, "rx_csr", &dma_res,
-			      &priv->rx_dma_csr);
-	if (ret)
-		goto err_free_netdev;
-
-
-	/* xSGDMA Tx Dispatcher address space */
-	ret = request_and_map(pdev, "tx_csr", &dma_res,
-			      &priv->tx_dma_csr);
-	if (ret)
-		goto err_free_netdev;
 
 	memset(&pcs_regmap_cfg, 0, sizeof(pcs_regmap_cfg));
 	memset(&mrc, 0, sizeof(mrc));
@@ -1341,7 +1285,7 @@ static int altera_tse_probe(struct platform_device *pdev)
 	/* The DMA buffer size already accounts for an alignment bias
 	 * to avoid unaligned access exceptions for the NIOS processor,
 	 */
-	priv->rx_dma_buf_sz = ALTERA_RXDMABUFFER_SIZE;
+	priv->dma_priv.rx_dma_buf_sz = ALTERA_RXDMABUFFER_SIZE;
 
 	/* get default MAC address from device tree */
 	ret = of_get_ethdev_address(pdev->dev.of_node, ndev);
@@ -1530,4 +1474,5 @@ module_platform_driver(altera_tse_driver);
 
 MODULE_AUTHOR("Altera Corporation");
 MODULE_DESCRIPTION("Altera Triple Speed Ethernet MAC driver");
+MODULE_IMPORT_NS(NET_ALTERA);
 MODULE_LICENSE("GPL v2");
diff --git a/drivers/net/ethernet/altera/altera_utils.c b/drivers/net/ethernet/altera/altera_utils.c
index e6a7fc9d8fb1..09a53f879b51 100644
--- a/drivers/net/ethernet/altera/altera_utils.c
+++ b/drivers/net/ethernet/altera/altera_utils.c
@@ -3,6 +3,7 @@
  * Copyright (C) 2014 Altera Corporation. All rights reserved
  */
 
+#include "altera_eth_dma.h"
 #include "altera_tse.h"
 #include "altera_utils.h"
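For readers less familiar with the embedding pattern this series relies on, the sketch below is illustrative only and is not part of the patch. All example_* names, dma_tx_avail(), mac_tx_avail() and to_mac() are made up for the example; they stand in for struct altera_dma_private, struct altera_tse_private and tse_tx_avail() above. The idea is the same: the MAC private structure embeds the shared DMA state by value, wrappers hand only &priv->dma_priv to the generic helpers, and container_of() lets a helper climb back to the enclosing structure when it needs MAC context. The ring arithmetic mirrors tse_tx_avail(): with an 8-entry ring, producer 5 and consumer 2, 2 + 8 - 5 - 1 = 4 slots remain (one slot is kept empty so a full ring can be told apart from an empty one).

/* Illustrative user-space sketch only -- not part of this patch. */
#include <stddef.h>

struct example_dma_private {			/* stands in for struct altera_dma_private */
	unsigned int tx_prod;
	unsigned int tx_cons;
	unsigned int tx_ring_size;
};

struct example_mac_private {			/* stands in for struct altera_tse_private */
	int revision;
	struct example_dma_private dma_priv;	/* shared DMA state embedded by value */
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Generic helper: free ring slots left for the producer, as in tse_tx_avail(). */
static unsigned int dma_tx_avail(const struct example_dma_private *dma)
{
	return dma->tx_cons + dma->tx_ring_size - dma->tx_prod - 1;
}

/* MAC-level wrapper hands only the embedded DMA state down. */
static unsigned int mac_tx_avail(const struct example_mac_private *priv)
{
	return dma_tx_avail(&priv->dma_priv);
}

/* A helper that needs MAC context climbs back to the outer structure. */
static struct example_mac_private *to_mac(struct example_dma_private *dma)
{
	return container_of(dma, struct example_mac_private, dma_priv);
}

int main(void)
{
	struct example_mac_private priv = {
		.dma_priv = { .tx_prod = 5, .tx_cons = 2, .tx_ring_size = 8 },
	};

	/* 8-slot ring, 3 buffers in flight -> 4 usable slots remain. */
	return (mac_tx_avail(&priv) == 4 &&
		to_mac(&priv.dma_priv) == &priv) ? 0 : 1;
}

Embedding the DMA state by value rather than through a pointer keeps a single allocation per netdev and lets netdev_priv() reach both the MAC and DMA state without extra indirection, which is what allows the probe path above to simply pass &priv->dma_priv into altera_eth_dma_probe().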