[v4,06/19] irqdomain: Drop revmap mutex
Commit Message
The global irq_domain_mutex is now held in all paths that update the
revmap structures so there is no longer any need for the revmap mutex.
Tested-by: Hsin-Yi Wang <hsinyi@chromium.org>
Tested-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
---
include/linux/irqdomain.h | 2 --
kernel/irq/irqdomain.c | 13 ++++++-------
2 files changed, 6 insertions(+), 9 deletions(-)
Comments
On Mon, Jan 16 2023 at 14:50, Johan Hovold wrote:
> The global irq_domain_mutex is now held in all paths that update the
> revmap structures so there is no longer any need for the revmap mutex.
This can also go after the 3rd race fix, but ...
> static void irq_domain_clear_mapping(struct irq_domain *domain,
> irq_hw_number_t hwirq)
> {
> + lockdep_assert_held(&irq_domain_mutex);
these lockdep asserts want to be part of the [dis]association race
fixes. They are completely unrelated to the removal of the revmap_mutex.
Your race fixes change the locking and you want to ensure that all
callers comply right there, no?
Thanks,
tglx
On Tue, Jan 17, 2023 at 10:23:20PM +0100, Thomas Gleixner wrote:
> On Mon, Jan 16 2023 at 14:50, Johan Hovold wrote:
> > The global irq_domain_mutex is now held in all paths that update the
> > revmap structures so there is no longer any need for the revmap mutex.
>
> This can also go after the 3rd race fix, but ...
>
> > static void irq_domain_clear_mapping(struct irq_domain *domain,
> > irq_hw_number_t hwirq)
> > {
> > + lockdep_assert_held(&irq_domain_mutex);
>
> these lockdep asserts want to be part of the [dis]association race
> fixes. They are completely unrelated to the removal of the revmap_mutex.
No, they are very much related to the removal of the revmap_mutex. These
functions deal with the revmap structures which before this patch were
clearly only modified with the revmap_mutex held.
The lockdep assert is here to guarantee my claim that all current
(and future) paths that end up modifying these structures do so under
the irq_domain_mutex instead.
> Your race fixes change the locking and you want to ensure that all
> callers comply right there, no?
I want to make sure that all callers of these functions comply, yes.
That's why the asserts belong in this patch.
Johan
On Wed, Jan 18 2023 at 10:22, Johan Hovold wrote:
> On Tue, Jan 17, 2023 at 10:23:20PM +0100, Thomas Gleixner wrote:
>> On Mon, Jan 16 2023 at 14:50, Johan Hovold wrote:
>> > The global irq_domain_mutex is now held in all paths that update the
>> > revmap structures so there is no longer any need for the revmap mutex.
>>
>> This can also go after the 3rd race fix, but ...
>>
>> > static void irq_domain_clear_mapping(struct irq_domain *domain,
>> > irq_hw_number_t hwirq)
>> > {
>> > + lockdep_assert_held(&irq_domain_mutex);
>>
>> these lockdep asserts want to be part of the [dis]association race
>> fixes. They are completely unrelated to the removal of the revmap_mutex.
>
> No, they are very much related to the removal of the revmap_mutex. These
> functions deal with the revmap structures which before this patch were
> clearly only modified with the revmap_mutex held.
>
> The lockdep assert is here to guarantee my claim that all current
> (and future) paths that end up modifying these structures do so under
> the irq_domain_mutex instead.
>
>> Your race fixes change the locking and you want to ensure that all
>> callers comply right there, no?
>
> I want to make sure that all callers of these functions comply, yes.
> That's why the asserts belong in this patch.
You can do this in a two step approach.
1) Add the new locking and ensure that the lock is held when
the functions are called
2) Safely remove the revmap_mutex because you already established
that revmap is protected by some other means
Thanks,
tglx
On Wed, Jan 18, 2023 at 02:05:29PM +0100, Thomas Gleixner wrote:
> On Wed, Jan 18 2023 at 10:22, Johan Hovold wrote:
> > On Tue, Jan 17, 2023 at 10:23:20PM +0100, Thomas Gleixner wrote:
> >> On Mon, Jan 16 2023 at 14:50, Johan Hovold wrote:
> >> > The global irq_domain_mutex is now held in all paths that update the
> >> > revmap structures so there is no longer any need for the revmap mutex.
> >>
> >> This can also go after the 3rd race fix, but ...
> >>
> >> > static void irq_domain_clear_mapping(struct irq_domain *domain,
> >> > irq_hw_number_t hwirq)
> >> > {
> >> > + lockdep_assert_held(&irq_domain_mutex);
> >>
> >> these lockdep asserts want to be part of the [dis]association race
> >> fixes. They are completely unrelated to the removal of the revmap_mutex.
> >
> > No, they are very much related to the removal of the revmap_mutex. These
> > functions deal with the revmap structures which before this patch were
> > clearly only modified with the revmap_mutex held.
> >
> > The lockdep assert is here to guarantee my claim that all current
> > (and future) paths that end up modifying these structures do so under
> > the irq_domain_mutex instead.
> >
> >> Your race fixes change the locking and you want to ensure that all
> >> callers comply right there, no?
> >
> > I want to make sure that all callers of these functions comply, yes.
> > That's why the asserts belong in this patch.
>
> You can do this in a two step approach.
>
> 1) Add the new locking and ensure that the lock is held when
> the functions are called
But the new locking has nothing to do with these functions. They are
added because they fix various races elsewhere. Adding lockdep
assertions in unrelated functions as part of those fixes doesn't really
make much sense.
> 2) Safely remove the revmap_mutex because you already established
> that revmap is protected by some other means
I still think it belongs in this patch.
Johan
On Wed, Jan 18 2023 at 14:10, Johan Hovold wrote:
> On Wed, Jan 18, 2023 at 02:05:29PM +0100, Thomas Gleixner wrote:
>> You can do this in a two step approach.
>>
>> 1) Add the new locking and ensure that the lock is held when
>> the functions are called
>
> But the new locking has nothing to do with these functions. They are
> added because they fix various races elsewhere. Adding lockdep
> assertions in unrelated functions as part of those fixes doesn't really
> make much sense.
Seriously? The point is that revmap_mutex is not protecting against any
of the races which you are trying to protect against. Ergo, any callsite
which does not hold the irqdomain mutex is part of the problem you are
trying to solve, no?
The removal of the revmap_mutex becomes possible due to the overall
protection scheme rework _after_ you established that all callers hold
the irqdomain mutex.
Thanks,
tglx
On Mon, Feb 06, 2023 at 02:09:02PM +0100, Thomas Gleixner wrote:
> On Wed, Jan 18 2023 at 14:10, Johan Hovold wrote:
> > On Wed, Jan 18, 2023 at 02:05:29PM +0100, Thomas Gleixner wrote:
> >> You can do this in a two step approach.
> >>
> >> 1) Add the new locking and ensure that the lock is held when
> >> the functions are called
> >
> > But the new locking has nothing to do with these functions. They are
> > added because they fix various races elsewhere. Adding lockdep
> > assertions in unrelated functions as part of those fixes doesn't really
> > make much sense.
>
> Seriously? The point is that revmap_mutex is not protecting against any
> of the races which you are trying to protect against. Ergo, any callsite
> which does not hold the irqdomain mutex is part of the problem you are
> trying to solve, no?
The current locking using the revmap_mutex is indeed incomplete as it
only serialises updates of the revmap structures themselves and
specifically does not prevent two mappings for the same interrupt from
being created. It basically just protects the integrity of the revmap tree.
The association and disassociation fixes are not sufficient to address the
mapping race, but those two changes do guarantee that all current paths
that access the revmap structures hold the irq_domain_mutex.
Once that has been established, there is no point in keeping the
revmap_mutex around for the update path (lookups are still protected by
RCU).
But this change is separate from the race that the disassociation fix
addressed (e.g. the racy updates of the domain mapcount) and it is also
independent of the fix for the mapping race (which still exists after
removing the revmap mutex with the current order of the patches).
Therefore dropping the revmap mutex belongs in its own patch and I
placed it here after the disassociation fix to highlight that it is
indeed independent of the later fixes for the mapping race.
It can be moved after, but I still think the lockdep asserts belong in
the patch that removes the revmap tree lock.
Johan
@@ -143,7 +143,6 @@ struct irq_domain_chip_generic;
* Revmap data, used internally by the irq domain code:
* @revmap_size: Size of the linear map table @revmap[]
* @revmap_tree: Radix map tree for hwirqs that don't fit in the linear map
- * @revmap_mutex: Lock for the revmap
* @revmap: Linear table of irq_data pointers
*/
struct irq_domain {
@@ -171,7 +170,6 @@ struct irq_domain {
irq_hw_number_t hwirq_max;
unsigned int revmap_size;
struct radix_tree_root revmap_tree;
- struct mutex revmap_mutex;
struct irq_data __rcu *revmap[];
};
@@ -214,7 +214,6 @@ struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, unsigned int s
/* Fill structure */
INIT_RADIX_TREE(&domain->revmap_tree, GFP_KERNEL);
- mutex_init(&domain->revmap_mutex);
domain->ops = ops;
domain->host_data = host_data;
domain->hwirq_max = hwirq_max;
@@ -501,30 +500,30 @@ static bool irq_domain_is_nomap(struct irq_domain *domain)
static void irq_domain_clear_mapping(struct irq_domain *domain,
irq_hw_number_t hwirq)
{
+ lockdep_assert_held(&irq_domain_mutex);
+
if (irq_domain_is_nomap(domain))
return;
- mutex_lock(&domain->revmap_mutex);
if (hwirq < domain->revmap_size)
rcu_assign_pointer(domain->revmap[hwirq], NULL);
else
radix_tree_delete(&domain->revmap_tree, hwirq);
- mutex_unlock(&domain->revmap_mutex);
}
static void irq_domain_set_mapping(struct irq_domain *domain,
irq_hw_number_t hwirq,
struct irq_data *irq_data)
{
+ lockdep_assert_held(&irq_domain_mutex);
+
if (irq_domain_is_nomap(domain))
return;
- mutex_lock(&domain->revmap_mutex);
if (hwirq < domain->revmap_size)
rcu_assign_pointer(domain->revmap[hwirq], irq_data);
else
radix_tree_insert(&domain->revmap_tree, hwirq, irq_data);
- mutex_unlock(&domain->revmap_mutex);
}
static void irq_domain_disassociate(struct irq_domain *domain, unsigned int irq)
@@ -1511,11 +1510,12 @@ static void irq_domain_fix_revmap(struct irq_data *d)
{
void __rcu **slot;
+ lockdep_assert_held(&irq_domain_mutex);
+
if (irq_domain_is_nomap(d->domain))
return;
/* Fix up the revmap. */
- mutex_lock(&d->domain->revmap_mutex);
if (d->hwirq < d->domain->revmap_size) {
/* Not using radix tree */
rcu_assign_pointer(d->domain->revmap[d->hwirq], d);
@@ -1524,7 +1524,6 @@ static void irq_domain_fix_revmap(struct irq_data *d)
if (slot)
radix_tree_replace_slot(&d->domain->revmap_tree, slot, d);
}
- mutex_unlock(&d->domain->revmap_mutex);
}
/**