Message ID: 20230215061035.1534950-1-qiang1.zhang@intel.com
State: New
Headers:
From: Zqiang <qiang1.zhang@intel.com>
To: dave@stgolabs.net, paulmck@kernel.org, josh@joshtriplett.org
Cc: linux-kernel@vger.kernel.org
Subject: [PATCH] locktorture: Add raw_spinlock* torture tests for PREEMPT_RT kernels
Date: Wed, 15 Feb 2023 14:10:35 +0800
Message-Id: <20230215061035.1534950-1-qiang1.zhang@intel.com>
Series: locktorture: Add raw_spinlock* torture tests for PREEMPT_RT kernels
Commit Message
Zqiang
Feb. 15, 2023, 6:10 a.m. UTC
In PREEMPT_RT kernels, spin_lock() and spin_lock_irq() are converted
to the sleepable rt_spin_lock(), and the interrupt-related suffixes of
spin_lock/unlock() (_irq, _irqsave/_irqrestore) do not affect the CPU's
interrupt state. This commit therefore adds raw_spin_lock torture
tests, which exercise the strict spin-lock implementation on RT kernels.
Signed-off-by: Zqiang <qiang1.zhang@intel.com>
---
kernel/locking/locktorture.c | 58 ++++++++++++++++++++++++++++++++++++
1 file changed, 58 insertions(+)
Comments
On Wed, Feb 15, 2023 at 02:10:35PM +0800, Zqiang wrote:
> In PREEMPT_RT kernels, spin_lock() and spin_lock_irq() are converted
> to the sleepable rt_spin_lock(), and the interrupt-related suffixes of
> spin_lock/unlock() (_irq, _irqsave/_irqrestore) do not affect the CPU's
> interrupt state. This commit therefore adds raw_spin_lock torture
> tests, which exercise the strict spin-lock implementation on RT kernels.
>
> Signed-off-by: Zqiang <qiang1.zhang@intel.com>

A nice addition!  Is this something you will be testing regularly?
If not, should there be additional locktorture scenarios, perhaps prefixed
by "RT-" to hint that they are not normally available?

Or did you have some other plan for making use of these?

							Thanx, Paul

> ---
>  kernel/locking/locktorture.c | 58 ++++++++++++++++++++++++++++++++++++
>  1 file changed, 58 insertions(+)
>
> [quoted patch trimmed]
>On Wed, Feb 15, 2023 at 02:10:35PM +0800, Zqiang wrote:
>> In PREEMPT_RT kernels, spin_lock() and spin_lock_irq() are converted
>> to the sleepable rt_spin_lock(), and the interrupt-related suffixes of
>> spin_lock/unlock() (_irq, _irqsave/_irqrestore) do not affect the CPU's
>> interrupt state. This commit therefore adds raw_spin_lock torture
>> tests, which exercise the strict spin-lock implementation on RT kernels.
>>
>> Signed-off-by: Zqiang <qiang1.zhang@intel.com>
>
>A nice addition!  Is this something you will be testing regularly?
>If not, should there be additional locktorture scenarios, perhaps prefixed
>by "RT-" to hint that they are not normally available?
>
>Or did you have some other plan for making use of these?

Hi Paul

Thanks for the reply. In fact, I want to enrich the locktorture tests;
after all, under the PREEMPT_RT kernel we lost coverage of the real
spin lock.

Thanks
Zqiang

>							Thanx, Paul
>
>[quoted patch trimmed]
On Sun, Feb 19, 2023 at 05:04:41AM +0000, Zhang, Qiang1 wrote:
> Hi Paul
>
> Thanks for the reply. In fact, I want to enrich the locktorture tests;
> after all, under the PREEMPT_RT kernel we lost coverage of the real
> spin lock.

Very well, how does the following look?

							Thanx, Paul

------------------------------------------------------------------------

commit edc9d419ee8c22821ffd664466a5cf19208c3f02
Author: Zqiang <qiang1.zhang@intel.com>
Date:   Wed Feb 15 14:10:35 2023 +0800

    locktorture: Add raw_spinlock* torture tests for PREEMPT_RT kernels

    In PREEMPT_RT kernels, both spin_lock() and spin_lock_irq() are converted
    to sleepable rt_spin_lock().  This means that the interrupt related suffix
    for spin_lock/unlock(_irq, irqsave/irqrestore) do not affect the CPU's
    interrupt state.  This commit therefore adds raw spin-lock torture tests.
    This in turn permits pure spin locks to be tested in PREEMPT_RT kernels.

    Signed-off-by: Zqiang <qiang1.zhang@intel.com>
    Signed-off-by: Paul E. McKenney <paulmck@kernel.org>

diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
index 9425aff089365..ed8e5baafe49f 100644
--- a/kernel/locking/locktorture.c
+++ b/kernel/locking/locktorture.c
@@ -257,6 +257,61 @@ static struct lock_torture_ops spin_lock_irq_ops = {
 	.name		= "spin_lock_irq"
 };
 
+#ifdef CONFIG_PREEMPT_RT
+static DEFINE_RAW_SPINLOCK(torture_raw_spinlock);
+
+static int torture_raw_spin_lock_write_lock(int tid __maybe_unused)
+__acquires(torture_raw_spinlock)
+{
+	raw_spin_lock(&torture_raw_spinlock);
+	return 0;
+}
+
+static void torture_raw_spin_lock_write_unlock(int tid __maybe_unused)
+__releases(torture_raw_spinlock)
+{
+	raw_spin_unlock(&torture_raw_spinlock);
+}
+
+static struct lock_torture_ops raw_spin_lock_ops = {
+	.writelock	= torture_raw_spin_lock_write_lock,
+	.write_delay	= torture_spin_lock_write_delay,
+	.task_boost	= torture_rt_boost,
+	.writeunlock	= torture_raw_spin_lock_write_unlock,
+	.readlock	= NULL,
+	.read_delay	= NULL,
+	.readunlock	= NULL,
+	.name		= "raw_spin_lock"
+};
+
+static int torture_raw_spin_lock_write_lock_irq(int tid __maybe_unused)
+__acquires(torture_raw_spinlock)
+{
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&torture_raw_spinlock, flags);
+	cxt.cur_ops->flags = flags;
+	return 0;
+}
+
+static void torture_raw_spin_lock_write_unlock_irq(int tid __maybe_unused)
+__releases(torture_raw_spinlock)
+{
+	raw_spin_unlock_irqrestore(&torture_raw_spinlock, cxt.cur_ops->flags);
+}
+
+static struct lock_torture_ops raw_spin_lock_irq_ops = {
+	.writelock	= torture_raw_spin_lock_write_lock_irq,
+	.write_delay	= torture_spin_lock_write_delay,
+	.task_boost	= torture_rt_boost,
+	.writeunlock	= torture_raw_spin_lock_write_unlock_irq,
+	.readlock	= NULL,
+	.read_delay	= NULL,
+	.readunlock	= NULL,
+	.name		= "raw_spin_lock_irq"
+};
+#endif // #ifdef CONFIG_PREEMPT_RT
+
 static DEFINE_RWLOCK(torture_rwlock);
 
 static int torture_rwlock_write_lock(int tid __maybe_unused)
@@ -1017,6 +1072,9 @@ static int __init lock_torture_init(void)
 	static struct lock_torture_ops *torture_ops[] = {
 		&lock_busted_ops,
 		&spin_lock_ops, &spin_lock_irq_ops,
+#ifdef CONFIG_PREEMPT_RT
+		&raw_spin_lock_ops, &raw_spin_lock_irq_ops,
+#endif // #ifdef CONFIG_PREEMPT_RT
 		&rw_lock_ops, &rw_lock_irq_ops,
 		&mutex_lock_ops,
 		&ww_mutex_lock_ops,
On Wed, 22 Feb 2023, Paul E. McKenney wrote:

>commit edc9d419ee8c22821ffd664466a5cf19208c3f02
>Author: Zqiang <qiang1.zhang@intel.com>
>Date:   Wed Feb 15 14:10:35 2023 +0800
>
>    locktorture: Add raw_spinlock* torture tests for PREEMPT_RT kernels
>
>    In PREEMPT_RT kernels, both spin_lock() and spin_lock_irq() are converted
>    to sleepable rt_spin_lock().  This means that the interrupt related suffix
>    for spin_lock/unlock(_irq, irqsave/irqrestore) do not affect the CPU's
>    interrupt state.  This commit therefore adds raw spin-lock torture tests.
>    This in turn permits pure spin locks to be tested in PREEMPT_RT kernels.
>
>    Signed-off-by: Zqiang <qiang1.zhang@intel.com>
>    Signed-off-by: Paul E. McKenney <paulmck@kernel.org>

This is a nice addition, thanks. Just one comment below.

>diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
>index 9425aff089365..ed8e5baafe49f 100644
>--- a/kernel/locking/locktorture.c
>+++ b/kernel/locking/locktorture.c
>@@ -257,6 +257,61 @@ static struct lock_torture_ops spin_lock_irq_ops = {
> 	.name		= "spin_lock_irq"
> };
>
>+#ifdef CONFIG_PREEMPT_RT
>+static DEFINE_RAW_SPINLOCK(torture_raw_spinlock);

How about leaving raw spinlocks regardless of preempt-rt, and instead
change the default lock (which is spin_lock) based on CONFIG_PREEMPT_RT
and use the raw one in that case?

Thanks,
Davidlohr
On Wed, Feb 22, 2023 at 07:53:59PM -0800, Davidlohr Bueso wrote:
> On Wed, 22 Feb 2023, Paul E. McKenney wrote:
>
> [quoted commit and diff trimmed]
>
> This is a nice addition, thanks. Just one comment below.
>
> How about leaving raw spinlocks regardless of preempt-rt, and instead
> change the default lock (which is spin_lock) based on CONFIG_PREEMPT_RT
> and use the raw one in that case?

That makes a lot of sense to me!  In fact, I tested this by deleting
those #ifdef statements.  ;-)

Zqiang, would you like to take the patch and make that change, with
attribution?

							Thanx, Paul
> On Wed, 22 Feb 2023, Paul E. McKenney wrote:
>
> [quoted commit and diff trimmed]
>
>> How about leaving raw spinlocks regardless of preempt-rt, and instead
>> change the default lock (which is spin_lock) based on CONFIG_PREEMPT_RT
>> and use the raw one in that case?
>
>That makes a lot of sense to me!  In fact, I tested this by deleting
>those #ifdef statements.  ;-)
>
>Zqiang, would you like to take the patch and make that change, with
>attribution?

If I understand correctly, I should remove the #ifdef statements, right?
If yes, I will change and resend 😊.

Thanks
Zqiang

>							Thanx, Paul
On Thu, 23 Feb 2023, Zhang, Qiang1 wrote:
>If I understand correctly, I should remove #ifdef statements, right?
Yes, but also please make torture_type default depend on PREEMPT_RT.
Thanks,
Davidlohr
diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
index 9425aff08936..521197366f27 100644
--- a/kernel/locking/locktorture.c
+++ b/kernel/locking/locktorture.c
@@ -257,6 +257,61 @@ static struct lock_torture_ops spin_lock_irq_ops = {
 	.name		= "spin_lock_irq"
 };
 
+#ifdef CONFIG_PREEMPT_RT
+static DEFINE_RAW_SPINLOCK(torture_raw_spinlock);
+
+static int torture_raw_spin_lock_write_lock(int tid __maybe_unused)
+__acquires(torture_raw_spinlock)
+{
+	raw_spin_lock(&torture_raw_spinlock);
+	return 0;
+}
+
+static void torture_raw_spin_lock_write_unlock(int tid __maybe_unused)
+__releases(torture_raw_spinlock)
+{
+	raw_spin_unlock(&torture_raw_spinlock);
+}
+
+static struct lock_torture_ops raw_spin_lock_ops = {
+	.writelock	= torture_raw_spin_lock_write_lock,
+	.write_delay	= torture_spin_lock_write_delay,
+	.task_boost	= torture_rt_boost,
+	.writeunlock	= torture_raw_spin_lock_write_unlock,
+	.readlock	= NULL,
+	.read_delay	= NULL,
+	.readunlock	= NULL,
+	.name		= "raw_spin_lock"
+};
+
+static int torture_raw_spin_lock_write_lock_irq(int tid __maybe_unused)
+__acquires(torture_raw_spinlock)
+{
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&torture_raw_spinlock, flags);
+	cxt.cur_ops->flags = flags;
+	return 0;
+}
+
+static void torture_raw_spin_lock_write_unlock_irq(int tid __maybe_unused)
+__releases(torture_raw_spinlock)
+{
+	raw_spin_unlock_irqrestore(&torture_raw_spinlock, cxt.cur_ops->flags);
+}
+
+static struct lock_torture_ops raw_spin_lock_irq_ops = {
+	.writelock	= torture_raw_spin_lock_write_lock_irq,
+	.write_delay	= torture_spin_lock_write_delay,
+	.task_boost	= torture_rt_boost,
+	.writeunlock	= torture_raw_spin_lock_write_unlock_irq,
+	.readlock	= NULL,
+	.read_delay	= NULL,
+	.readunlock	= NULL,
+	.name		= "raw_spin_lock_irq"
+};
+#endif
+
 static DEFINE_RWLOCK(torture_rwlock);
 
 static int torture_rwlock_write_lock(int tid __maybe_unused)
@@ -1017,6 +1072,9 @@ static int __init lock_torture_init(void)
 	static struct lock_torture_ops *torture_ops[] = {
 		&lock_busted_ops,
 		&spin_lock_ops, &spin_lock_irq_ops,
+#ifdef CONFIG_PREEMPT_RT
+		&raw_spin_lock_ops, &raw_spin_lock_irq_ops,
+#endif
 		&rw_lock_ops, &rw_lock_irq_ops,
 		&mutex_lock_ops,
 		&ww_mutex_lock_ops,