[net-next,v5,13/14] libie: add per-queue Page Pool stats

Message ID 20231124154732.1623518-14-aleksander.lobakin@intel.com
State New
Series net: intel: start The Great Code Dedup + Page Pool for iavf

Commit Message

Alexander Lobakin Nov. 24, 2023, 3:47 p.m. UTC
  Expand the libie generic per-queue stats with the generic Page Pool
stats provided by the API itself, when CONFIG_PAGE_POOL_STATS is
enabled. When it's not, there'll be no such fields in the stats
structure, so no space wasted.
They are also a bit special in terms of how they are obtained. A
&page_pool accumulates statistics only until it's destroyed, which
happens on ifdown. So, in order not to lose any statistics, get the
stats and store them in the queue container before destroying the pool.
This container survives ifup/ifdown cycles, so it effectively stores
the statistics accumulated since the very first pool was allocated on
this queue. When the stats need to be exported, first take the numbers
from this container and then add the "live" numbers -- the ones the
currently active pool returns. The resulting values always represent
the actual device-lifetime stats.
There's a cast from &page_pool_stats to `u64 *` in a couple of
functions, but they are guarded with static asserts to make sure it's
safe to do. FWIW, it saves a lot of object code.

Reviewed-by: Paul Menzel <pmenzel@molgen.mpg.de>
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
---
 drivers/net/ethernet/intel/libie/internal.h | 20 ++++++
 drivers/net/ethernet/intel/libie/rx.c       |  9 +++
 drivers/net/ethernet/intel/libie/stats.c    | 68 +++++++++++++++++++++
 include/linux/net/intel/libie/stats.h       | 34 ++++++++++-
 4 files changed, 130 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/ethernet/intel/libie/internal.h
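
For orientation, here is the scheme from the commit message condensed into a
few lines. This is only a simplified sketch, not the actual libie code (the
real implementation is in the patch at the bottom): the queue_stats /
queue_stats_sync / queue_stats_export names are made up, and it assumes
CONFIG_PAGE_POOL_STATS is enabled. It relies on page_pool_get_stats()
accumulating into the struct it is passed rather than overwriting it.

#include <net/page_pool/helpers.h>

/* Survives ifup/ifdown cycles; collects the stats of every pool ever
 * used on this queue.
 */
struct queue_stats {
	struct page_pool_stats	archived;
};

/* Called right before page_pool_destroy() on ifdown, so nothing the
 * current pool has counted gets lost.
 */
static void queue_stats_sync(struct queue_stats *qs, struct page_pool *pool)
{
	/* page_pool_get_stats() adds on top of what is already there */
	page_pool_get_stats(pool, &qs->archived);
}

/* Called when exporting the stats, e.g. for Ethtool. */
static void queue_stats_export(const struct queue_stats *qs,
			       struct page_pool *live,
			       struct page_pool_stats *out)
{
	/* the stats archived from the previously destroyed pools... */
	*out = qs->archived;

	/* ...plus the "live" numbers from the currently active pool */
	if (live)
		page_pool_get_stats(live, out);
}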
  

Comments

Alexander Lobakin Nov. 29, 2023, 1:40 p.m. UTC | #1
From: Alexander Lobakin <aleksander.lobakin@intel.com>
Date: Fri, 24 Nov 2023 16:47:31 +0100

> Expand the libie generic per-queue stats with the generic Page Pool
> stats provided by the API itself, when CONFIG_PAGE_POOL_STATS is
> enabled. When it's not, there'll be no such fields in the stats
> structure, so no space wasted.

Jakub,

Do I get it correctly that after Page Pool Netlink introspection was
merged, this commit makes no sense and we shouldn't add PP stats to the
drivers' private ones?

> They are also a bit special in terms of how they are obtained. A
> &page_pool accumulates statistics only until it's destroyed, which
> happens on ifdown. So, in order not to lose any statistics, get the
> stats and store them in the queue container before destroying the pool.
> This container survives ifup/ifdown cycles, so it effectively stores
> the statistics accumulated since the very first pool was allocated on
> this queue. When the stats need to be exported, first take the numbers
> from this container and then add the "live" numbers -- the ones the
> currently active pool returns. The resulting values always represent
> the actual device-lifetime stats.
> There's a cast from &page_pool_stats to `u64 *` in a couple of
> functions, but they are guarded with static asserts to make sure it's
> safe to do. FWIW, it saves a lot of object code.

Thanks,
Olek
  
Jakub Kicinski Nov. 29, 2023, 2:29 p.m. UTC | #2
On Wed, 29 Nov 2023 14:40:33 +0100 Alexander Lobakin wrote:
> > Expand the libie generic per-queue stats with the generic Page Pool
> > stats provided by the API itself, when CONFIG_PAGE_POOL_STATS is
> > enabled. When it's not, there'll be no such fields in the stats
> > structure, so no space wasted.  
> 
> Do I get it correctly that after Page Pool Netlink introspection was
> merged, this commit makes no sense and we shouldn't add PP stats to the
> drivers' private ones?

Yes, 100%.

FWIW I am aware that better tooling would be good so non-developers
could access the PP Netlink :(  I'm thinking we should clean up
YNL lib packaging a little and try to convince iproute2 maintainers
to accept a simple CLI built on top of it.
  
Alexander Lobakin Nov. 30, 2023, 4:01 p.m. UTC | #3
From: Jakub Kicinski <kuba@kernel.org>
Date: Wed, 29 Nov 2023 06:29:14 -0800

> On Wed, 29 Nov 2023 14:40:33 +0100 Alexander Lobakin wrote:
>>> Expand the libie generic per-queue stats with the generic Page Pool
>>> stats provided by the API itself, when CONFIG_PAGE_POOL_STATS is
>>> enabled. When it's not, there'll be no such fields in the stats
>>> structure, so no space wasted.  
>>
>> Do I get it correctly that after Page Pool Netlink introspection was
>> merged, this commit makes no sense and we shouldn't add PP stats to the
>> drivers' private ones?
> 
> Yes, 100%.

Meh, this way the stats won't survive ifdown/ifup cycles, as page_pools
usually get destroyed on ifdown :z
In that patch, I back up the PP stats to a device-lifetime container
when the pool gets destroyed; maybe we could do something similar?

> 
> FWIW I am aware that better tooling would be good so non-developers
> could access the PP Netlink :(  I'm thinking we should clean up
> YNL lib packaging a little and try to convince iproute2 maintainers
> to accept a simple CLI built on top of it.

Thanks,
Olek
  
Alexander Lobakin Nov. 30, 2023, 4:45 p.m. UTC | #4
From: Alexander Lobakin <aleksander.lobakin@intel.com>
Date: Thu, 30 Nov 2023 17:01:23 +0100

> From: Jakub Kicinski <kuba@kernel.org>
> Date: Wed, 29 Nov 2023 06:29:14 -0800
> 
>> On Wed, 29 Nov 2023 14:40:33 +0100 Alexander Lobakin wrote:
>>>> Expand the libie generic per-queue stats with the generic Page Pool
>>>> stats provided by the API itself, when CONFIG_PAGE_POOL_STATS is
>>>> enabled. When it's not, there'll be no such fields in the stats
>>>> structure, so no space wasted.  
>>>
>>> Do I get it correctly that after Page Pool Netlink introspection was
>>> merged, this commit makes no sense and we shouldn't add PP stats to the
>>> drivers' private ones?
>>
>> Yes, 100%.
> 
> Meh, this way the stats won't survive ifdown/ifup cycles, as page_pools
> usually get destroyed on ifdown :z
> In that patch, I back up the PP stats to a device-lifetime container
> when the pool gets destroyed; maybe we could do something similar?

I can still pull the PP stats into the driver before destroying it, but
there's no way to tell the PP that I have some archived stats for it.
Maybe we could have page_pool_params_slow::get_stats() or something like
this? One possible shape is sketched below.

> 
>>
>> FWIW I am aware that better tooling would be good so non-developers
>> could access the PP Netlink :(  I'm thinking we should clean up
>> YNL lib packaging a little and try to convince iproute2 maintainers
>> to accept a simple CLI built on top of it.

Thanks,
Olek
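
One possible shape for the hook suggested above -- purely illustrative:
page_pool has no such callback today, and every name below (the typedef and
the driver function) is made up. The idea is that the driver registers the
callback plus a cookie via a new page_pool_params_slow member at pool
creation time, and the core invokes it whenever it assembles the stats for
the pool, so counters archived from previously destroyed pools on the same
queue get added on top.

#include <net/page_pool/helpers.h>

/* Hypothetical callback type (requires CONFIG_PAGE_POOL_STATS for
 * struct page_pool_stats).
 */
typedef void (*pp_archived_stats_cb)(const struct page_pool *pool,
				     struct page_pool_stats *stats,
				     void *priv);

/* What a driver-side implementation could look like: @priv points to the
 * per-queue container holding the stats saved before earlier pools were
 * destroyed.
 */
static void drv_pp_add_archived_stats(const struct page_pool *pool,
				      struct page_pool_stats *stats,
				      void *priv)
{
	const struct page_pool_stats *archived = priv;

	/* struct page_pool_stats has no merge helper, so a real patch
	 * would add one; field-by-field addition shown for brevity.
	 */
	stats->alloc_stats.fast += archived->alloc_stats.fast;
	stats->alloc_stats.slow += archived->alloc_stats.slow;
	/* ... and so on for the remaining alloc/recycle counters ... */
}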
  
Jakub Kicinski Dec. 1, 2023, 6:55 a.m. UTC | #5
On Thu, 30 Nov 2023 17:45:10 +0100 Alexander Lobakin wrote:
> > Meh, this way the stats won't survive ifdown/ifup cycles, as page_pools
> > usually get destroyed on ifdown :z
> > In that patch, I back up the PP stats to a device-lifetime container
> > when the pool gets destroyed; maybe we could do something similar?
> 
> I can still pull the PP stats into the driver before destroying it, but
> there's no way to tell the PP that I have some archived stats for it.
> Maybe we could have page_pool_params_slow::get_stats() or something like
> this?

Why do you think the historic values matter?
User space monitoring will care about incremental values.
It's not like we need the page pool stats to match the Rx packet count.
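
To illustrate: a collector samples the counters periodically and only ever
consumes the difference between two samples, so a counter that restarts from
zero when the pool is recreated is just one extra case to handle. A generic
sketch, not tied to any particular stats interface:

#include <stdint.h>

/* Increment between two samples of a counter that only grows, but may be
 * reset to zero when the object behind it (here: the page_pool) is
 * destroyed and recreated.
 */
static uint64_t counter_delta(uint64_t prev, uint64_t curr)
{
	/* Counter restarted (pool destroyed on ifdown, a fresh one created
	 * on ifup): everything the new instance has counted so far is the
	 * increment.
	 */
	if (curr < prev)
		return curr;

	return curr - prev;
}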
  

Patch

diff --git a/drivers/net/ethernet/intel/libie/internal.h b/drivers/net/ethernet/intel/libie/internal.h
new file mode 100644
index 000000000000..13bb0a89f59e
--- /dev/null
+++ b/drivers/net/ethernet/intel/libie/internal.h
@@ -0,0 +1,20 @@ 
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* libie internal declarations not to be used in the drivers.
+ *
+ * Copyright(c) 2023 Intel Corporation.
+ */
+
+#ifndef __LIBIE_INTERNAL_H
+#define __LIBIE_INTERNAL_H
+
+struct libie_rx_queue;
+
+#ifdef CONFIG_PAGE_POOL_STATS
+void libie_rq_stats_sync_pp(const struct libie_rx_queue *rq);
+#else
+static inline void libie_rq_stats_sync_pp(const struct libie_rx_queue *rq)
+{
+}
+#endif
+
+#endif /* __LIBIE_INTERNAL_H */
diff --git a/drivers/net/ethernet/intel/libie/rx.c b/drivers/net/ethernet/intel/libie/rx.c
index 520a269f7d31..fcc5c3c44645 100644
--- a/drivers/net/ethernet/intel/libie/rx.c
+++ b/drivers/net/ethernet/intel/libie/rx.c
@@ -3,6 +3,8 @@ 
 
 #include <linux/net/intel/libie/rx.h>
 
+#include "internal.h"
+
 /* Rx buffer management */
 
 /**
@@ -64,9 +66,16 @@  EXPORT_SYMBOL_NS_GPL(libie_rx_page_pool_create, LIBIE);
 /**
  * libie_rx_page_pool_destroy - destroy a &page_pool created by libie
  * @rq: receive queue to process
+ *
+ * As the stats usually have the same lifetime as the device, but PP is usually
+ * created/destroyed on ifup/ifdown, in order to not lose the stats accumulated
+ * during the last ifup, the PP stats need to be added to the driver stats
+ * container. Then the PP gets destroyed.
  */
 void libie_rx_page_pool_destroy(struct libie_rx_queue *rq)
 {
+	libie_rq_stats_sync_pp(rq);
+
 	page_pool_destroy(rq->pp);
 	rq->pp = NULL;
 }
diff --git a/drivers/net/ethernet/intel/libie/stats.c b/drivers/net/ethernet/intel/libie/stats.c
index bdcbe4304c55..9c4ef237af08 100644
--- a/drivers/net/ethernet/intel/libie/stats.c
+++ b/drivers/net/ethernet/intel/libie/stats.c
@@ -6,6 +6,8 @@ 
 #include <linux/net/intel/libie/rx.h>
 #include <linux/net/intel/libie/stats.h>
 
+#include "internal.h"
+
 /* Rx per-queue stats */
 
 static const char * const libie_rq_stats_str[] = {
@@ -16,6 +18,70 @@  static const char * const libie_rq_stats_str[] = {
 
 #define LIBIE_RQ_STATS_NUM	ARRAY_SIZE(libie_rq_stats_str)
 
+#ifdef CONFIG_PAGE_POOL_STATS
+/**
+ * libie_rq_stats_get_pp - get the current stats from a &page_pool
+ * @sarr: local array to add stats to
+ * @pool: pool to get the stats from
+ *
+ * Adds the current "live" stats from an online PP to the stats read from
+ * the RQ container, so that the actual totals will be returned.
+ */
+static void libie_rq_stats_get_pp(u64 *sarr, const struct page_pool *pool)
+{
+	struct page_pool_stats *pps;
+	/* Used only to calculate pos below */
+	struct libie_rq_stats tmp;
+	u32 pos;
+
+	/* Validate the libie PP stats array can be cast <-> PP struct */
+	static_assert(sizeof(tmp.pp) == sizeof(*pps));
+
+	if (!pool)
+		return;
+
+	/* Position of the first Page Pool stats field */
+	pos = (u64_stats_t *)&tmp.pp - tmp.raw;
+	pps = (typeof(pps))&sarr[pos];
+
+	page_pool_get_stats(pool, pps);
+}
+
+/**
+ * libie_rq_stats_sync_pp - add the current PP stats to the RQ stats container
+ * @rq: Rx queue to synchronize
+ *
+ * Called by libie_rx_page_pool_destroy() to save the stats before destroying
+ * the pool.
+ */
+void libie_rq_stats_sync_pp(const struct libie_rx_queue *rq)
+{
+	struct libie_rq_stats *stats = rq->stats;
+	struct page_pool_stats pps = { };
+	u64 *sarr = (u64 *)&pps;
+	u64_stats_t *qarr;
+
+	if (!stats)
+		return;
+
+	qarr = (u64_stats_t *)&stats->pp;
+	page_pool_get_stats(rq->pp, &pps);
+
+	u64_stats_update_begin(&stats->syncp);
+
+	for (u32 i = 0; i < sizeof(pps) / sizeof(*sarr); i++)
+		u64_stats_add(&qarr[i], sarr[i]);
+
+	u64_stats_update_end(&stats->syncp);
+}
+#else
+static void libie_rq_stats_get_pp(u64 *sarr, const struct page_pool *pool)
+{
+}
+
+/* static inline void libie_rq_stats_sync_pp() is declared in "internal.h" */
+#endif
+
 /**
  * libie_rq_stats_get_sset_count - get the number of Ethtool RQ stats provided
  *
@@ -57,6 +123,8 @@  void libie_rq_stats_get_data(u64 **data, const struct libie_rx_queue *rq)
 			sarr[i] = u64_stats_read(&stats->raw[i]);
 	} while (u64_stats_fetch_retry(&stats->syncp, start));
 
+	libie_rq_stats_get_pp(sarr, rq->pp);
+
 	for (u32 i = 0; i < LIBIE_RQ_STATS_NUM; i++)
 		(*data)[i] += sarr[i];
 
diff --git a/include/linux/net/intel/libie/stats.h b/include/linux/net/intel/libie/stats.h
index 4e6dfb8c715f..f913968d7516 100644
--- a/include/linux/net/intel/libie/stats.h
+++ b/include/linux/net/intel/libie/stats.h
@@ -49,6 +49,17 @@ 
  * fragments: number of processed descriptors carrying only a fragment
  * alloc_page_fail: number of Rx page allocation fails
  * build_skb_fail: number of build_skb() fails
+ * pp_alloc_fast: pages taken from the cache or ring
+ * pp_alloc_slow: actual page allocations
+ * pp_alloc_slow_ho: non-order-0 page allocations
+ * pp_alloc_empty: number of times the pool was empty
+ * pp_alloc_refill: number of cache refills
+ * pp_alloc_waive: NUMA node mismatches during recycling
+ * pp_recycle_cached: direct recyclings into the cache
+ * pp_recycle_cache_full: number of times the cache was full
+ * pp_recycle_ring: recyclings into the ring
+ * pp_recycle_ring_full: number of times the ring was full
+ * pp_recycle_released_ref: pages released due to elevated refcnt
  */
 
 #define DECLARE_LIBIE_RQ_NAPI_STATS(act)		\
@@ -60,9 +71,27 @@ 
 	act(alloc_page_fail)				\
 	act(build_skb_fail)
 
+#ifdef CONFIG_PAGE_POOL_STATS
+#define DECLARE_LIBIE_RQ_PP_STATS(act)			\
+	act(pp_alloc_fast)				\
+	act(pp_alloc_slow)				\
+	act(pp_alloc_slow_ho)				\
+	act(pp_alloc_empty)				\
+	act(pp_alloc_refill)				\
+	act(pp_alloc_waive)				\
+	act(pp_recycle_cached)				\
+	act(pp_recycle_cache_full)			\
+	act(pp_recycle_ring)				\
+	act(pp_recycle_ring_full)			\
+	act(pp_recycle_released_ref)
+#else
+#define DECLARE_LIBIE_RQ_PP_STATS(act)
+#endif
+
 #define DECLARE_LIBIE_RQ_STATS(act)			\
 	DECLARE_LIBIE_RQ_NAPI_STATS(act)		\
-	DECLARE_LIBIE_RQ_FAIL_STATS(act)
+	DECLARE_LIBIE_RQ_FAIL_STATS(act)		\
+	DECLARE_LIBIE_RQ_PP_STATS(act)
 
 struct libie_rx_queue;
 
@@ -74,6 +103,9 @@  struct libie_rq_stats {
 #define act(s)	u64_stats_t	s;
 			DECLARE_LIBIE_RQ_NAPI_STATS(act);
 			DECLARE_LIBIE_RQ_FAIL_STATS(act);
+			struct_group(pp,
+				DECLARE_LIBIE_RQ_PP_STATS(act);
+			);
 #undef act
 		};
 		DECLARE_FLEX_ARRAY(u64_stats_t, raw);