[net-next] ethtool: ice: Support for RSS settings to GTP from ethtool

Message ID 20231008075221.61863-1-hayatake396@gmail.com
State New
Series [net-next] ethtool: ice: Support for RSS settings to GTP from ethtool

Commit Message

Takeru Hayasaka Oct. 8, 2023, 7:52 a.m. UTC
This patch enables RSS functionality for GTP packets using ethtool.
A user can include the TEID in the hash and make RSS work for GTP-U
over IPv4 by doing the following:
`ethtool -N ens3 rx-flow-hash gtpu4 sd`
In addition to gtpu(4|6), we now support gtpc(4|6), gtpu(4|6)e,
gtpu(4|6)u, and gtpu(4|6)d.
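
For example (assuming an ethtool binary that already understands the
new flow-type names), the configured hash fields can be read back with
`ethtool -n ens3 rx-flow-hash gtpu4`, and GTP-C hashing can be enabled
the same way: `ethtool -N ens3 rx-flow-hash gtpc4 sd`.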

Signed-off-by: Takeru Hayasaka <hayatake396@gmail.com>
---
 Sorry, I added the Signed-off-by by mistake in an earlier email; it is fixed now.
 drivers/net/ethernet/intel/ice/ice_ethtool.c | 98 +++++++++++++++++++-
 drivers/net/ethernet/intel/ice/ice_flow.h    | 15 +++
 drivers/net/ethernet/intel/ice/ice_lib.c     | 70 ++++++++++++++
 include/uapi/linux/ethtool.h                 | 10 ++
 4 files changed, 191 insertions(+), 2 deletions(-)
  

Comments

Jakub Kicinski Oct. 10, 2023, 7:32 p.m. UTC | #1
On Sun,  8 Oct 2023 07:52:22 +0000 Takeru Hayasaka wrote:
> This patch enables RSS functionality for GTP packets using ethtool.
> A user can include the TEID in the hash and make RSS work for GTP-U
> over IPv4 by doing the following:
> `ethtool -N ens3 rx-flow-hash gtpu4 sd`
> In addition to gtpu(4|6), we now support gtpc(4|6), gtpu(4|6)e,
> gtpu(4|6)u, and gtpu(4|6)d.

This is for tunneling, right? IDK much about GTP but we don't have flow
types for other tunneling protos. What makes this one special?
  
Takeru Hayasaka Oct. 11, 2023, 1:56 a.m. UTC | #2
Hi Jakub san
Thank you for your reply!

GTP generates a flow that includes an ID called TEID to identify the
tunnel. This tunnel is created for each UE (User Equipment).
By performing RSS based on this flow, it is possible to apply RSS for
each communication unit from the UE.
Without this, RSS would only be effective within the range of IP addresses.
For instance, the PGW can only perform RSS within the IP range of the SGW.
What I'm trying to say is that RSS based solely on IP addresses can be
problematic from a load distribution perspective, especially if
there's a bias in the terminals connected to a particular base
station.
As a reference that discusses a similar topic, please see the link
below (it covers TEID-based flow hashing, not RSS):
https://docs.nvidia.com/networking-ethernet-software/cumulus-linux-56/Layer-3/Routing/Equal-Cost-Multipath-Load-Sharing/#gtp-hashing
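
For orientation: the TEID is a fixed 32-bit field in the GTPv1-U
header, so hashing on it spreads traffic per tunnel/UE rather than
per outer IP pair. A minimal sketch of the mandatory header per
3GPP TS 29.281 (the struct name is mine, for illustration only):

	struct gtpu_hdr {
		__u8	flags;		/* version, PT, E/S/PN flag bits */
		__u8	msg_type;	/* 0xff = G-PDU, i.e. user data */
		__be16	length;		/* payload length after this header */
		__be32	teid;		/* Tunnel Endpoint Identifier */
	} __attribute__((packed));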

Thank you for your understanding.
---
Sorry! My email was blocked because it wasn't sent in plain text mode.
I've made the necessary changes and will resend it.

On Wed, Oct 11, 2023 at 4:32 AM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Sun,  8 Oct 2023 07:52:22 +0000 Takeru Hayasaka wrote:
> > This patch enables RSS functionality for GTP packets using ethtool.
> > A user can include the TEID in the hash and make RSS work for GTP-U
> > over IPv4 by doing the following:
> > `ethtool -N ens3 rx-flow-hash gtpu4 sd`
> > In addition to gtpu(4|6), we now support gtpc(4|6), gtpu(4|6)e,
> > gtpu(4|6)u, and gtpu(4|6)d.
>
> This is for tunneling, right? IDK much about GTP but we don't have flow
> types for other tunneling protos. What makes this one special?
  
Jakub Kicinski Oct. 11, 2023, 2:10 a.m. UTC | #3
On Wed, 11 Oct 2023 10:56:17 +0900 takeru hayasaka wrote:
> GTP generates a flow that includes an ID called TEID to identify the
> tunnel. This tunnel is created for each UE (User Equipment).
> By performing RSS based on this flow, it is possible to apply RSS for
> each communication unit from the UE.
> Without this, RSS would only be effective within the range of IP addresses.
> For instance, the PGW can only perform RSS within the IP range of the SGW.
> What I'm trying to say is that RSS based solely on IP addresses can be
> problematic from a load distribution perspective, especially if
> there's a bias in the terminals connected to a particular base
> station.
> As a reference that discusses a similar topic, please see the link
> below (it covers TEID-based flow hashing, not RSS):
> https://docs.nvidia.com/networking-ethernet-software/cumulus-linux-56/Layer-3/Routing/Equal-Cost-Multipath-Load-Sharing/#gtp-hashing

Makes sense, thanks for the extra information. I think it would be
worth adding all of this to the commit message!

Regarding the patch - you are only adding flow types, not a new field 
(which are defined as RXH_*). If we want to hash on an extra field, 
I think we need to specify that field as well?

> Thank you for your understanding.
> ---
> Sorry! My email was blocked because it wasn't sent in plain text mode.
> I've made the necessary changes and will resend it.

No worries! Additional request - in the future please prefer the
bottom-posting or interleaved style of replies:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
  
Takeru Hayasaka Oct. 11, 2023, 5:25 a.m. UTC | #4
Thank you for teaching me so much!

> > Sorry! My email was blocked because it wasn't sent in plain text mode.
> > I've made the necessary changes and will resend it.
I got it. I'll do it like that :)
Please let me know if I'm using it incorrectly.

> Makes sense, thanks for the extra information. I think it would be
> worth adding all of this to the commit message!

Of course :)
I got it. I will add this background information when I send the next
version of the patch.

> Regarding the patch - you are only adding flow types, not a new field
> (which are defined as RXH_*). If we want to hash on an extra field,
> I think we need to specify that field as well?

I've been really struggling with this...
When I read the Intel ICE documentation, it suggests that in RSS, TEID
can be an additional input.
However, I couldn't think of a reason not to include TEID when
enabling RSS for GTP cases.

https://www.intel.com/content/www/us/en/content-details/617015/intel-ethernet-controller-e810-dynamic-device-personalization-ddp-technology-guide.html
(cf. Table 8. Patterns and Input Sets for iavf RSS)

However, for Flow Director, it's clear that you'd want to include the
TEID field. But since I found that someone from Intel has already
configured it to use TEID with Flow Director, I thought maybe we don't
need to add the TEID parameter for now.

https://patchwork.ozlabs.org/project/intel-wired-lan/cover/20210126065206.137422-1-haiyue.wang@intel.com/

If we want to include something other than TEID (e.g., QFI) in Flow
Director, I think it would be better to prepare a new field.

On Wed, Oct 11, 2023 at 11:10 AM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Wed, 11 Oct 2023 10:56:17 +0900 takeru hayasaka wrote:
> > GTP generates a flow that includes an ID called TEID to identify the
> > tunnel. This tunnel is created for each UE (User Equipment).
> > By performing RSS based on this flow, it is possible to apply RSS for
> > each communication unit from the UE.
> > Without this, RSS would only be effective within the range of IP addresses.
> > For instance, the PGW can only perform RSS within the IP range of the SGW.
> > What I'm trying to say is that RSS based solely on IP addresses can be
> > problematic from a load distribution perspective, especially if
> > there's a bias in the terminals connected to a particular base
> > station.
> > As a reference that discusses a similar topic, please see the link
> > below (it covers TEID-based flow hashing, not RSS):
> > https://docs.nvidia.com/networking-ethernet-software/cumulus-linux-56/Layer-3/Routing/Equal-Cost-Multipath-Load-Sharing/#gtp-hashing
>
> Makes sense, thanks for the extra information. I think it would be
> worth adding all of this to the commit message!
>
> Regarding the patch - you are only adding flow types, not a new field
> (which are defined as RXH_*). If we want to hash on an extra field,
> I think we need to specify that field as well?
>
> > Thank you for your understanding.
> > ---
> > Sorry! My email was blocked because it wasn't sent in plain text mode.
> > I've made the necessary changes and will resend it.
>
> No worries! Additional request - in the future please prefer the
> bottom-posting or interleaved style of replies:
> https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
  
Jakub Kicinski Oct. 11, 2023, 4:41 p.m. UTC | #5
On Wed, 11 Oct 2023 14:25:55 +0900 takeru hayasaka wrote:
> > Regarding the patch - you are only adding flow types, not a new field
> > (which are defined as RXH_*). If we want to hash on an extra field,
> > I think we need to specify that field as well?  
> 
> I've been really struggling with this...
> When I read the Intel ICE documentation, it suggests that in RSS, TEID
> can be an additional input.
> However, I couldn't think of a reason not to include TEID when
> enabling RSS for GTP cases.
> 
> https://www.intel.com/content/www/us/en/content-details/617015/intel-ethernet-controller-e810-dynamic-device-personalization-ddp-technology-guide.html
> (cf. Table 8. Patterns and Input Sets for iavf RSS)
> 
> However, for Flow Director, it's clear that you'd want to include the
> TEID field. But since I found that someone from Intel has already
> configured it to use TEID with Flow Director, I thought maybe we don't
> need to add the TEID parameter for now.
> 
> https://patchwork.ozlabs.org/project/intel-wired-lan/cover/20210126065206.137422-1-haiyue.wang@intel.com/
> 
> If we want to include something other than TEID (e.g., QFI) in Flow
> Director, I think it would be better to prepare a new field.

I think we should expose TEID as a field. It's easier to understand 
the API if fields are all listed, and not implied by the flow hash.
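
For illustration, such a field would be a new bit next to the existing
RXH_* defines in include/uapi/linux/ethtool.h; the name and bit
position below are hypothetical, not something this patch adds:

	#define	RXH_GTP_TEID	(1 << 8) /* teid in case of GTP */

ethtool would then map a new input-set character to it, the same way
's' and 'd' map to RXH_IP_SRC and RXH_IP_DST today.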
  
Takeru Hayasaka Oct. 12, 2023, 1:08 a.m. UTC | #6
> I think we should expose TEID as a field. It's easier to understand
> the API if fields are all listed, and not implied by the flow hash.

Jakub san, thank you for your feedback!
I will make the changes and send them in the next version of the patch.

On Thu, Oct 12, 2023 at 1:41 AM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Wed, 11 Oct 2023 14:25:55 +0900 takeru hayasaka wrote:
> > > Regarding the patch - you are only adding flow types, not a new field
> > > (which are defined as RXH_*). If we want to hash on an extra field,
> > > I think we need to specify that field as well?
> >
> > I've been really struggling with this...
> > When I read the Intel ICE documentation, it suggests that in RSS, TEID
> > can be an additional input.
> > However, I couldn't think of a reason not to include TEID when
> > enabling RSS for GTP cases.
> >
> > https://www.intel.com/content/www/us/en/content-details/617015/intel-ethernet-controller-e810-dynamic-device-personalization-ddp-technology-guide.html
> > (cf. Table 8. Patterns and Input Sets for iavf RSS)
> >
> > However, for Flow Director, it's clear that you'd want to include the
> > TEID field. But since I found that someone from Intel has already
> > configured it to use TEID with Flow Director, I thought maybe we don't
> > need to add the TEID parameter for now.
> >
> > https://patchwork.ozlabs.org/project/intel-wired-lan/cover/20210126065206.137422-1-haiyue.wang@intel.com/
> >
> > If we want to include something other than TEID (e.g., QFI) in Flow
> > Director, I think it would be better to prepare a new field.
>
> I think we should expose TEID as a field. It's easier to understand
> the API if fields are all listed, and not implied by the flow hash.
  

Patch

diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
index ad4d4702129f..a5ee6638bc60 100644
--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
+++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
@@ -2404,6 +2404,21 @@  static u32 ice_parse_hdrs(struct ethtool_rxnfc *nfc)
 	case SCTP_V4_FLOW:
 		hdrs |= ICE_FLOW_SEG_HDR_SCTP | ICE_FLOW_SEG_HDR_IPV4;
 		break;
+	case GTPU_V4_FLOW:
+		hdrs |= ICE_FLOW_SEG_HDR_GTPU_IP | ICE_FLOW_SEG_HDR_IPV4;
+		break;
+	case GTPC_V4_FLOW:
+		hdrs |= ICE_FLOW_SEG_HDR_GTPC_TEID | ICE_FLOW_SEG_HDR_IPV4;
+		break;
+	case GTPU_EH_V4_FLOW:
+		hdrs |= ICE_FLOW_SEG_HDR_GTPU_EH | ICE_FLOW_SEG_HDR_IPV4;
+		break;
+	case GTPU_UL_V4_FLOW:
+		hdrs |= ICE_FLOW_SEG_HDR_GTPU_UP | ICE_FLOW_SEG_HDR_IPV4;
+		break;
+	case GTPU_DL_V4_FLOW:
+		hdrs |= ICE_FLOW_SEG_HDR_GTPU_DWN | ICE_FLOW_SEG_HDR_IPV4;
+		break;
 	case TCP_V6_FLOW:
 		hdrs |= ICE_FLOW_SEG_HDR_TCP | ICE_FLOW_SEG_HDR_IPV6;
 		break;
@@ -2413,9 +2428,25 @@  static u32 ice_parse_hdrs(struct ethtool_rxnfc *nfc)
 	case SCTP_V6_FLOW:
 		hdrs |= ICE_FLOW_SEG_HDR_SCTP | ICE_FLOW_SEG_HDR_IPV6;
 		break;
+	case GTPU_V6_FLOW:
+		hdrs |= ICE_FLOW_SEG_HDR_GTPU_IP | ICE_FLOW_SEG_HDR_IPV6;
+		break;
+	case GTPC_V6_FLOW:
+		hdrs |= ICE_FLOW_SEG_HDR_GTPC_TEID | ICE_FLOW_SEG_HDR_IPV6;
+		break;
+	case GTPU_EH_V6_FLOW:
+		hdrs |= ICE_FLOW_SEG_HDR_GTPU_EH | ICE_FLOW_SEG_HDR_IPV6;
+		break;
+	case GTPU_UL_V6_FLOW:
+		hdrs |= ICE_FLOW_SEG_HDR_GTPU_UP | ICE_FLOW_SEG_HDR_IPV6;
+		break;
+	case GTPU_DL_V6_FLOW:
+		hdrs |= ICE_FLOW_SEG_HDR_GTPU_DWN | ICE_FLOW_SEG_HDR_IPV6;
+		break;
 	default:
 		break;
 	}
+
 	return hdrs;
 }
 
@@ -2431,6 +2462,12 @@  static u32 ice_parse_hdrs(struct ethtool_rxnfc *nfc)
 	BIT_ULL(ICE_FLOW_FIELD_IDX_SCTP_SRC_PORT)
 #define ICE_FLOW_HASH_FLD_SCTP_DST_PORT	\
 	BIT_ULL(ICE_FLOW_FIELD_IDX_SCTP_DST_PORT)
+#define ICE_FLOW_HASH_FLD_GTPC_TEID	BIT_ULL(ICE_FLOW_FIELD_IDX_GTPC_TEID)
+#define ICE_FLOW_HASH_FLD_GTPU_IP_TEID BIT_ULL(ICE_FLOW_FIELD_IDX_GTPU_IP_TEID)
+#define ICE_FLOW_HASH_FLD_GTPU_EH_TEID BIT_ULL(ICE_FLOW_FIELD_IDX_GTPU_EH_TEID)
+#define ICE_FLOW_HASH_FLD_GTPU_EH_QFI BIT_ULL(ICE_FLOW_FIELD_IDX_GTPU_EH_QFI)
+#define ICE_FLOW_HASH_FLD_GTPU_UP_TEID BIT_ULL(ICE_FLOW_FIELD_IDX_GTPU_UP_TEID)
+#define ICE_FLOW_HASH_FLD_GTPU_DWN_TEID BIT_ULL(ICE_FLOW_FIELD_IDX_GTPU_DWN_TEID)
 
 /**
  * ice_parse_hash_flds - parses hash fields from RSS hash input
@@ -2448,6 +2485,11 @@  static u64 ice_parse_hash_flds(struct ethtool_rxnfc *nfc)
 		case TCP_V4_FLOW:
 		case UDP_V4_FLOW:
 		case SCTP_V4_FLOW:
+		case GTPU_V4_FLOW:
+		case GTPC_V4_FLOW:
+		case GTPU_EH_V4_FLOW:
+		case GTPU_UL_V4_FLOW:
+		case GTPU_DL_V4_FLOW:
 			if (nfc->data & RXH_IP_SRC)
 				hfld |= ICE_FLOW_HASH_FLD_IPV4_SA;
 			if (nfc->data & RXH_IP_DST)
@@ -2456,6 +2498,11 @@  static u64 ice_parse_hash_flds(struct ethtool_rxnfc *nfc)
 		case TCP_V6_FLOW:
 		case UDP_V6_FLOW:
 		case SCTP_V6_FLOW:
+		case GTPU_V6_FLOW:
+		case GTPC_V6_FLOW:
+		case GTPU_EH_V6_FLOW:
+		case GTPU_UL_V6_FLOW:
+		case GTPU_DL_V6_FLOW:
 			if (nfc->data & RXH_IP_SRC)
 				hfld |= ICE_FLOW_HASH_FLD_IPV6_SA;
 			if (nfc->data & RXH_IP_DST)
@@ -2489,11 +2536,50 @@  static u64 ice_parse_hash_flds(struct ethtool_rxnfc *nfc)
 			if (nfc->data & RXH_L4_B_2_3)
 				hfld |= ICE_FLOW_HASH_FLD_SCTP_DST_PORT;
 			break;
+		case GTPU_V4_FLOW:
+		case GTPU_V6_FLOW:
+		case GTPU_EH_V4_FLOW:
+		case GTPU_EH_V6_FLOW:
+		case GTPU_UL_V4_FLOW:
+		case GTPU_UL_V6_FLOW:
+		case GTPU_DL_V4_FLOW:
+		case GTPU_DL_V6_FLOW:
+			if (nfc->data & RXH_L4_B_0_1)
+				hfld |= ICE_FLOW_HASH_FLD_TCP_SRC_PORT |
+					ICE_FLOW_HASH_FLD_UDP_SRC_PORT;
+			if (nfc->data & RXH_L4_B_2_3)
+				hfld |= ICE_FLOW_HASH_FLD_TCP_DST_PORT |
+					ICE_FLOW_HASH_FLD_UDP_DST_PORT;
+			break;
 		default:
 			break;
 		}
 	}
 
+	switch (nfc->flow_type) {
+	case GTPC_V4_FLOW:
+	case GTPC_V6_FLOW:
+		hfld |= ICE_FLOW_HASH_FLD_GTPC_TEID;
+		break;
+	case GTPU_V4_FLOW:
+	case GTPU_V6_FLOW:
+		hfld |= ICE_FLOW_HASH_FLD_GTPU_IP_TEID;
+		break;
+	case GTPU_EH_V4_FLOW:
+	case GTPU_EH_V6_FLOW:
+		hfld |= ICE_FLOW_HASH_FLD_GTPU_EH_TEID;
+		break;
+	case GTPU_UL_V4_FLOW:
+	case GTPU_UL_V6_FLOW:
+		hfld |= ICE_FLOW_HASH_FLD_GTPU_UP_TEID;
+		break;
+	case GTPU_DL_V4_FLOW:
+	case GTPU_DL_V6_FLOW:
+		hfld |= ICE_FLOW_HASH_FLD_GTPU_DWN_TEID;
+		break;
+	default:
+		break;
+	}
 	return hfld;
 }
 
@@ -2590,12 +2676,20 @@  ice_get_rss_hash_opt(struct ice_vsi *vsi, struct ethtool_rxnfc *nfc)
 
 	if (hash_flds & ICE_FLOW_HASH_FLD_TCP_SRC_PORT ||
 	    hash_flds & ICE_FLOW_HASH_FLD_UDP_SRC_PORT ||
-	    hash_flds & ICE_FLOW_HASH_FLD_SCTP_SRC_PORT)
+	    hash_flds & ICE_FLOW_HASH_FLD_SCTP_SRC_PORT ||
+	    hash_flds & ICE_FLOW_HASH_FLD_GTPU_IP_TEID ||
+	    hash_flds & ICE_FLOW_HASH_FLD_GTPU_EH_TEID ||
+	    hash_flds & ICE_FLOW_HASH_FLD_GTPU_UP_TEID ||
+	    hash_flds & ICE_FLOW_HASH_FLD_GTPU_DWN_TEID)
 		nfc->data |= (u64)RXH_L4_B_0_1;
 
 	if (hash_flds & ICE_FLOW_HASH_FLD_TCP_DST_PORT ||
 	    hash_flds & ICE_FLOW_HASH_FLD_UDP_DST_PORT ||
-	    hash_flds & ICE_FLOW_HASH_FLD_SCTP_DST_PORT)
+	    hash_flds & ICE_FLOW_HASH_FLD_SCTP_DST_PORT ||
+	    hash_flds & ICE_FLOW_HASH_FLD_GTPU_IP_TEID ||
+	    hash_flds & ICE_FLOW_HASH_FLD_GTPU_EH_TEID ||
+	    hash_flds & ICE_FLOW_HASH_FLD_GTPU_UP_TEID ||
+	    hash_flds & ICE_FLOW_HASH_FLD_GTPU_DWN_TEID)
 		nfc->data |= (u64)RXH_L4_B_2_3;
 }
 
diff --git a/drivers/net/ethernet/intel/ice/ice_flow.h b/drivers/net/ethernet/intel/ice/ice_flow.h
index b465d27d9b80..7ddf6ce994e7 100644
--- a/drivers/net/ethernet/intel/ice/ice_flow.h
+++ b/drivers/net/ethernet/intel/ice/ice_flow.h
@@ -64,6 +64,21 @@ 
 	(ICE_FLOW_HASH_IPV6 | ICE_FLOW_HASH_GTP_U_EH_TEID | \
 	 ICE_FLOW_HASH_GTP_U_EH_QFI)
 
+#define ICE_FLOW_HASH_GTP_U_UP_TEID \
+	(BIT_ULL(ICE_FLOW_FIELD_IDX_GTPU_UP_TEID))
+#define ICE_FLOW_HASH_GTP_U_DWN_TEID \
+	(BIT_ULL(ICE_FLOW_FIELD_IDX_GTPU_DWN_TEID))
+
+#define ICE_FLOW_HASH_GTP_U_IPV4_UP \
+	(ICE_FLOW_HASH_IPV4 | ICE_FLOW_HASH_GTP_U_UP_TEID)
+#define ICE_FLOW_HASH_GTP_U_IPV6_UP \
+	(ICE_FLOW_HASH_IPV6 | ICE_FLOW_HASH_GTP_U_UP_TEID)
+
+#define ICE_FLOW_HASH_GTP_U_IPV4_DWN \
+	(ICE_FLOW_HASH_IPV4 | ICE_FLOW_HASH_GTP_U_DWN_TEID)
+#define ICE_FLOW_HASH_GTP_U_IPV6_DWN \
+	(ICE_FLOW_HASH_IPV6 | ICE_FLOW_HASH_GTP_U_DWN_TEID)
+
 #define ICE_FLOW_HASH_PPPOE_SESS_ID \
 	(BIT_ULL(ICE_FLOW_FIELD_IDX_PPPOE_SESS_ID))
 
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index 201570cd2e0b..a9664b48eddb 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -1667,6 +1667,41 @@  static void ice_vsi_set_rss_flow_fld(struct ice_vsi *vsi)
 		dev_dbg(dev, "ice_add_rss_cfg failed for sctp4 flow, vsi = %d, error = %d\n",
 			vsi_num, status);
 
+	/* configure RSS for gtpc4 with input set IPv4 src/dst and TEID */
+	status = ice_add_rss_cfg(hw, vsi_handle, ICE_FLOW_HASH_GTP_IPV4_TEID,
+				 ICE_FLOW_SEG_HDR_GTPC_TEID | ICE_FLOW_SEG_HDR_IPV4);
+	if (status)
+		dev_dbg(dev, "ice_add_rss_cfg failed for gtpc4 flow, vsi = %d, error = %d\n",
+			vsi_num, status);
+
+	/* configure RSS for gtpu4 with input set IPv4 src/dst and TEID */
+	status = ice_add_rss_cfg(hw, vsi_handle, ICE_FLOW_HASH_GTP_U_IPV4_TEID,
+				 ICE_FLOW_SEG_HDR_GTPU_IP | ICE_FLOW_SEG_HDR_IPV4);
+	if (status)
+		dev_dbg(dev, "ice_add_rss_cfg failed for gtpu4 flow, vsi = %d, error = %d\n",
+			vsi_num, status);
+
+	/* configure RSS for gtpu4e with input set IPv4 src/dst, TEID and QFI */
+	status = ice_add_rss_cfg(hw, vsi_handle, ICE_FLOW_HASH_GTP_U_IPV4_EH,
+				 ICE_FLOW_SEG_HDR_GTPU_EH | ICE_FLOW_SEG_HDR_IPV4);
+	if (status)
+		dev_dbg(dev, "ice_add_rss_cfg failed for gtpu4e flow, vsi = %d, error = %d\n",
+			vsi_num, status);
+
+	/* configure RSS for gtpu4u with input set IPv4 src/dst and TEID */
+	status = ice_add_rss_cfg(hw, vsi_handle, ICE_FLOW_HASH_GTP_U_IPV4_UP,
+				 ICE_FLOW_SEG_HDR_GTPU_UP | ICE_FLOW_SEG_HDR_IPV4);
+	if (status)
+		dev_dbg(dev, "ice_add_rss_cfg failed for gtpu4u flow, vsi = %d, error = %d\n",
+			vsi_num, status);
+
+	/* configure RSS for gtpu4d with input set IPv4 src/dst and TEID */
+	status = ice_add_rss_cfg(hw, vsi_handle, ICE_FLOW_HASH_GTP_U_IPV4_DWN,
+				 ICE_FLOW_SEG_HDR_GTPU_DWN | ICE_FLOW_SEG_HDR_IPV4);
+	if (status)
+		dev_dbg(dev, "ice_add_rss_cfg failed for gtpu4d flow, vsi = %d, error = %d\n",
+			vsi_num, status);
+
 	/* configure RSS for tcp6 with input set IPv6 src/dst, TCP src/dst */
 	status = ice_add_rss_cfg(hw, vsi_handle, ICE_HASH_TCP_IPV6,
 				 ICE_FLOW_SEG_HDR_TCP | ICE_FLOW_SEG_HDR_IPV6);
@@ -1688,6 +1723,41 @@  static void ice_vsi_set_rss_flow_fld(struct ice_vsi *vsi)
 		dev_dbg(dev, "ice_add_rss_cfg failed for sctp6 flow, vsi = %d, error = %d\n",
 			vsi_num, status);
 
+	/* configure RSS for gtpc6 with input set IPv6 src/dst and TEID */
+	status = ice_add_rss_cfg(hw, vsi_handle, ICE_FLOW_HASH_GTP_IPV6_TEID,
+				 ICE_FLOW_SEG_HDR_GTPC_TEID | ICE_FLOW_SEG_HDR_IPV6);
+	if (status)
+		dev_dbg(dev, "ice_add_rss_cfg failed for gtpc6 flow, vsi = %d, error = %d\n",
+			vsi_num, status);
+
+	/* configure RSS for gtpu6 with input set IPv6 src/dst and TEID */
+	status = ice_add_rss_cfg(hw, vsi_handle, ICE_FLOW_HASH_GTP_U_IPV6_TEID,
+				 ICE_FLOW_SEG_HDR_GTPU_IP | ICE_FLOW_SEG_HDR_IPV6);
+	if (status)
+		dev_dbg(dev, "ice_add_rss_cfg failed for gtpu6 flow, vsi = %d, error = %d\n",
+			vsi_num, status);
+
+	/* configure RSS for gtpu6e with input set IPv6 src/dst, TEID and QFI */
+	status = ice_add_rss_cfg(hw, vsi_handle, ICE_FLOW_HASH_GTP_U_IPV6_EH,
+				 ICE_FLOW_SEG_HDR_GTPU_EH | ICE_FLOW_SEG_HDR_IPV6);
+	if (status)
+		dev_dbg(dev, "ice_add_rss_cfg failed for gtpu6e flow, vsi = %d, error = %d\n",
+			vsi_num, status);
+
+	/* configure RSS for gtpu6u with input set IPv6 src/dst and TEID */
+	status = ice_add_rss_cfg(hw, vsi_handle, ICE_FLOW_HASH_GTP_U_IPV6_UP,
+				 ICE_FLOW_SEG_HDR_GTPU_UP | ICE_FLOW_SEG_HDR_IPV6);
+	if (status)
+		dev_dbg(dev, "ice_add_rss_cfg failed for gtpu6u flow, vsi = %d, error = %d\n",
+			vsi_num, status);
+
+	/* configure RSS for gtpu6d with input set IPv6 src/dst and TEID */
+	status = ice_add_rss_cfg(hw, vsi_handle, ICE_FLOW_HASH_GTP_U_IPV6_DWN,
+				 ICE_FLOW_SEG_HDR_GTPU_DWN | ICE_FLOW_SEG_HDR_IPV6);
+	if (status)
+		dev_dbg(dev, "ice_add_rss_cfg failed for gtpu6d flow, vsi = %d, error = %d\n",
+			vsi_num, status);
+
 	status = ice_add_rss_cfg(hw, vsi_handle, ICE_FLOW_HASH_ESP_SPI,
 				 ICE_FLOW_SEG_HDR_ESP);
 	if (status)
diff --git a/include/uapi/linux/ethtool.h b/include/uapi/linux/ethtool.h
index f7fba0dc87e5..f3af2a78f7dd 100644
--- a/include/uapi/linux/ethtool.h
+++ b/include/uapi/linux/ethtool.h
@@ -2011,6 +2011,16 @@  static inline int ethtool_validate_duplex(__u8 duplex)
 #define	IPV4_FLOW	0x10	/* hash only */
 #define	IPV6_FLOW	0x11	/* hash only */
 #define	ETHER_FLOW	0x12	/* spec only (ether_spec) */
+#define	GTPU_V4_FLOW	0x13	/* hash only */
+#define	GTPU_V6_FLOW	0x14	/* hash only */
+#define	GTPC_V4_FLOW	0x15	/* hash only */
+#define	GTPC_V6_FLOW	0x16	/* hash only */
+#define	GTPU_EH_V4_FLOW	0x17	/* hash only */
+#define	GTPU_EH_V6_FLOW	0x18	/* hash only */
+#define	GTPU_UL_V4_FLOW	0x19	/* hash only */
+#define	GTPU_UL_V6_FLOW	0x1a	/* hash only */
+#define	GTPU_DL_V4_FLOW	0x1b	/* hash only */
+#define	GTPU_DL_V6_FLOW	0x1c	/* hash only */
 /* Flag to enable additional fields in struct ethtool_rx_flow_spec */
 #define	FLOW_EXT	0x80000000
 #define	FLOW_MAC_EXT	0x40000000