[v3,rcu/dev,2/2] locktorture: Make the rt_boost factor a tunable

Message ID 20221213204839.321027-2-joel@joelfernandes.org
State New
Headers
Series [v3,rcu/dev,1/2] locktorture: Allow non-rtmutex lock types to be boosted

Commit Message

Joel Fernandes Dec. 13, 2022, 8:48 p.m. UTC
  The rt boosting in locktorture has a factor variable that is currently large
enough that boosting only happens once every minute or so. Add a tunable to
reduce the factor so that boosting happens more often, to exercise paths and
reach failure modes earlier. With this change, setting the factor to 50 makes
boosting happen every 10 seconds or so.

Tested with boot parameters:
locktorture.torture_type=mutex_lock
locktorture.onoff_interval=1
locktorture.nwriters_stress=8
locktorture.stutter=0
locktorture.rt_boost=1
locktorture.rt_boost_factor=50
locktorture.nlocks=3

[ Apply Davidlohr Bueso feedback on quoting rt_boost_factor. ]

Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
---
 kernel/locking/locktorture.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)
  

Comments

Paul E. McKenney Dec. 13, 2022, 10:20 p.m. UTC | #1
On Tue, Dec 13, 2022 at 08:48:39PM +0000, Joel Fernandes (Google) wrote:
> The rt boosting in locktorture has a factor variable that is currently large
> enough that boosting only happens once every minute or so. Add a tunable to
> reduce the factor so that boosting happens more often, to exercise paths and
> reach failure modes earlier. With this change, setting the factor to 50 makes
> boosting happen every 10 seconds or so.
> 
> Tested with boot parameters:
> locktorture.torture_type=mutex_lock
> locktorture.onoff_interval=1
> locktorture.nwriters_stress=8
> locktorture.stutter=0
> locktorture.rt_boost=1
> locktorture.rt_boost_factor=50
> locktorture.nlocks=3
> 
> [ Apply Davidlohr Bueso feedback on quoting rt_boost_factor. ]
> 
> Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>

Queued and pushed both, thank you both!

I am not seeing any evidence of boot parameters being quoted in the
kernel/locking directory, and I don't feel like I should be the one to
be the first to push that convention into kernel/locking, so I left that
change off.  I don't have an opinion either way on it myself, aside from
being more than a bit wary of the churn that would be required to impose
this convention uniformly.

							Thanx, Paul

> ---
>  kernel/locking/locktorture.c | 12 +++++++-----
>  1 file changed, 7 insertions(+), 5 deletions(-)
> 
> diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
> index e2271e8fc302..87e861da0ad5 100644
> --- a/kernel/locking/locktorture.c
> +++ b/kernel/locking/locktorture.c
> @@ -48,6 +48,7 @@ torture_param(int, stat_interval, 60,
>  torture_param(int, stutter, 5, "Number of jiffies to run/halt test, 0=disable");
>  torture_param(int, rt_boost, 2,
>  		"Do periodic rt-boost. 0=Disable, 1=Only for rt_mutex, 2=For all lock types.");
> +torture_param(int, rt_boost_factor, 50, "A factor determining how often rt-boost happens.");
>  torture_param(int, verbose, 1,
>  	     "Enable verbose debugging printk()s");
>  
> @@ -131,12 +132,12 @@ static void torture_lock_busted_write_unlock(int tid __maybe_unused)
>  
>  static void __torture_rt_boost(struct torture_random_state *trsp)
>  {
> -	const unsigned int factor = 50000; /* yes, quite arbitrary */
> +	const unsigned int factor = rt_boost_factor;
>  
>  	if (!rt_task(current)) {
>  		/*
> -		 * Boost priority once every ~50k operations. When the
> -		 * task tries to take the lock, the rtmutex it will account
> +		 * Boost priority once every 'rt_boost_factor' operations. When
> +		 * the task tries to take the lock, the rtmutex it will account
>  		 * for the new priority, and do any corresponding pi-dance.
>  		 */
>  		if (trsp && !(torture_random(trsp) %
> @@ -146,8 +147,9 @@ static void __torture_rt_boost(struct torture_random_state *trsp)
>  			return;
>  	} else {
>  		/*
> -		 * The task will remain boosted for another ~500k operations,
> -		 * then restored back to its original prio, and so forth.
> +		 * The task will remain boosted for another 10 * 'rt_boost_factor'
> +		 * operations, then restored back to its original prio, and so
> +		 * forth.
>  		 *
>  		 * When @trsp is nil, we want to force-reset the task for
>  		 * stopping the kthread.
> -- 
> 2.39.0.314.g84b9a713c41-goog
>
  

Patch

diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
index e2271e8fc302..87e861da0ad5 100644
--- a/kernel/locking/locktorture.c
+++ b/kernel/locking/locktorture.c
@@ -48,6 +48,7 @@  torture_param(int, stat_interval, 60,
 torture_param(int, stutter, 5, "Number of jiffies to run/halt test, 0=disable");
 torture_param(int, rt_boost, 2,
 		"Do periodic rt-boost. 0=Disable, 1=Only for rt_mutex, 2=For all lock types.");
+torture_param(int, rt_boost_factor, 50, "A factor determining how often rt-boost happens.");
 torture_param(int, verbose, 1,
 	     "Enable verbose debugging printk()s");
 
@@ -131,12 +132,12 @@  static void torture_lock_busted_write_unlock(int tid __maybe_unused)
 
 static void __torture_rt_boost(struct torture_random_state *trsp)
 {
-	const unsigned int factor = 50000; /* yes, quite arbitrary */
+	const unsigned int factor = rt_boost_factor;
 
 	if (!rt_task(current)) {
 		/*
-		 * Boost priority once every ~50k operations. When the
-		 * task tries to take the lock, the rtmutex it will account
+		 * Boost priority once every 'rt_boost_factor' operations. When
+		 * the task tries to take the lock, the rtmutex it will account
 		 * for the new priority, and do any corresponding pi-dance.
 		 */
 		if (trsp && !(torture_random(trsp) %
@@ -146,8 +147,9 @@  static void __torture_rt_boost(struct torture_random_state *trsp)
 			return;
 	} else {
 		/*
-		 * The task will remain boosted for another ~500k operations,
-		 * then restored back to its original prio, and so forth.
+		 * The task will remain boosted for another 10 * 'rt_boost_factor'
+		 * operations, then restored back to its original prio, and so
+		 * forth.
 		 *
 		 * When @trsp is nil, we want to force-reset the task for
 		 * stopping the kthread.