[01/32] perf: Allow a PMU to have a parent

Message ID 20230404134225.13408-2-Jonathan.Cameron@huawei.com
State New
Series Add parents to struct pmu -> dev

Commit Message

Jonathan Cameron April 4, 2023, 1:41 p.m. UTC
Some PMUs have well-defined parents, such as PCI devices.
As device_initialize() and device_add() are both within
pmu_dev_alloc(), which is called from perf_pmu_register(),
there is no opportunity to set the parent from within a driver.

Add a struct device *parent field to struct pmu and use that
to set the parent.

Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>

---
Previously posted in the CPMU series, hence the change log.
v3: No change
---
 include/linux/perf_event.h | 1 +
 kernel/events/core.c       | 1 +
 2 files changed, 2 insertions(+)
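
As an illustration of the intended use, a PCI-based PMU driver could do
something like this before registering (a sketch only: "my_pmu" and the
probe function are invented; only the pmu->parent field is from this patch):

struct my_pmu {
	struct pmu pmu;		/* other struct pmu callbacks omitted */
	/* driver-private state would live here */
};

static int my_pmu_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	struct my_pmu *mpmu;

	mpmu = devm_kzalloc(&pdev->dev, sizeof(*mpmu), GFP_KERNEL);
	if (!mpmu)
		return -ENOMEM;

	/* Hang the PMU's sysfs device off the PCI device. */
	mpmu->pmu.parent = &pdev->dev;

	return perf_pmu_register(&mpmu->pmu, "my_pmu", -1);
}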
  

Comments

Greg KH April 4, 2023, 1:51 p.m. UTC | #1
On Tue, Apr 04, 2023 at 02:41:54PM +0100, Jonathan Cameron wrote:
> Some PMUs have well-defined parents, such as PCI devices.
> As device_initialize() and device_add() are both within
> pmu_dev_alloc(), which is called from perf_pmu_register(),
> there is no opportunity to set the parent from within a driver.
> 
> Add a struct device *parent field to struct pmu and use that
> to set the parent.
> 
> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> Reviewed-by: Dan Williams <dan.j.williams@intel.com>
> 

Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  
Yicong Yang April 6, 2023, 4:03 a.m. UTC | #2
On 2023/4/4 21:41, Jonathan Cameron wrote:
> Some PMUs have well-defined parents, such as PCI devices.
> As device_initialize() and device_add() are both within
> pmu_dev_alloc(), which is called from perf_pmu_register(),
> there is no opportunity to set the parent from within a driver.
> 
> Add a struct device *parent field to struct pmu and use that
> to set the parent.
> 
> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> Reviewed-by: Dan Williams <dan.j.williams@intel.com>
> 
> ---
> Previously posted in the CPMU series, hence the change log.
> v3: No change
> ---
>  include/linux/perf_event.h | 1 +
>  kernel/events/core.c       | 1 +
>  2 files changed, 2 insertions(+)
> 
> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
> index d5628a7b5eaa..b99db1eda72c 100644
> --- a/include/linux/perf_event.h
> +++ b/include/linux/perf_event.h
> @@ -303,6 +303,7 @@ struct pmu {
>  
>  	struct module			*module;
>  	struct device			*dev;
> +	struct device			*parent;
>  	const struct attribute_group	**attr_groups;
>  	const struct attribute_group	**attr_update;
>  	const char			*name;
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index fb3e436bcd4a..a84c282221f2 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -11367,6 +11367,7 @@ static int pmu_dev_alloc(struct pmu *pmu)
>  
>  	dev_set_drvdata(pmu->dev, pmu);
>  	pmu->dev->bus = &pmu_bus;
> +	pmu->dev->parent = pmu->parent;

If there's no parent assigned, is it OK to add a check here? Then we could catch it
earlier, maybe at the development stage.
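
Something like the following, for example (untested, just to illustrate
the idea):

	/* In pmu_dev_alloc(): flag PMUs registered without a parent. */
	if (!pmu->parent)
		pr_warn("perf: PMU '%s' has no parent device\n", pmu->name);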

Thanks.

>  	pmu->dev->release = pmu_dev_release;
>  
>  	ret = dev_set_name(pmu->dev, "%s", pmu->name);
>
  
Jonathan Cameron April 6, 2023, 10:16 a.m. UTC | #3
On Thu, 6 Apr 2023 12:03:27 +0800
Yicong Yang <yangyicong@huawei.com> wrote:

> On 2023/4/4 21:41, Jonathan Cameron wrote:
> > Some PMUs have well-defined parents, such as PCI devices.
> > As device_initialize() and device_add() are both within
> > pmu_dev_alloc(), which is called from perf_pmu_register(),
> > there is no opportunity to set the parent from within a driver.
> > 
> > Add a struct device *parent field to struct pmu and use that
> > to set the parent.
> > 
> > Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> > Reviewed-by: Dan Williams <dan.j.williams@intel.com>
> > 
> > ---
> > Previously posted in the CPMU series, hence the change log.
> > v3: No change
> > ---
> >  include/linux/perf_event.h | 1 +
> >  kernel/events/core.c       | 1 +
> >  2 files changed, 2 insertions(+)
> > 
> > diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
> > index d5628a7b5eaa..b99db1eda72c 100644
> > --- a/include/linux/perf_event.h
> > +++ b/include/linux/perf_event.h
> > @@ -303,6 +303,7 @@ struct pmu {
> >  
> >  	struct module			*module;
> >  	struct device			*dev;
> > +	struct device			*parent;
> >  	const struct attribute_group	**attr_groups;
> >  	const struct attribute_group	**attr_update;
> >  	const char			*name;
> > diff --git a/kernel/events/core.c b/kernel/events/core.c
> > index fb3e436bcd4a..a84c282221f2 100644
> > --- a/kernel/events/core.c
> > +++ b/kernel/events/core.c
> > @@ -11367,6 +11367,7 @@ static int pmu_dev_alloc(struct pmu *pmu)
> >  
> >  	dev_set_drvdata(pmu->dev, pmu);
> >  	pmu->dev->bus = &pmu_bus;
> > +	pmu->dev->parent = pmu->parent;  
> 
> If there's no parent assigned, is it OK to add a check here? Then we could catch it
> earlier, maybe at the development stage.

In the long run I agree it would be good.  Short term there are more instances of
struct pmu that don't have parents than those that do (even after this series).
We need to figure out what to do about those before adding checks on it being
set.

Jonathan

> 
> Thanks.
> 
> >  	pmu->dev->release = pmu_dev_release;
> >  
> >  	ret = dev_set_name(pmu->dev, "%s", pmu->name);
> >
  
Peter Zijlstra April 6, 2023, 12:40 p.m. UTC | #4
On Thu, Apr 06, 2023 at 11:16:07AM +0100, Jonathan Cameron wrote:

> In the long run I agree it would be good.  Short term there are more instances of
> struct pmu that don't have parents than those that do (even after this series).
> We need to figure out what to do about those before adding checks on it being
> set.

Right, I don't think you've touched *any* of the x86 PMUs for example,
and getting everybody that boots an x86 kernel a warning isn't going to
go over well :-)
  
Jonathan Cameron April 6, 2023, 4:44 p.m. UTC | #5
On Thu, 6 Apr 2023 14:40:40 +0200
Peter Zijlstra <peterz@infradead.org> wrote:

> On Thu, Apr 06, 2023 at 11:16:07AM +0100, Jonathan Cameron wrote:
> 
> > In the long run I agree it would be good.  Short term there are more instances of
> > struct pmu that don't have parents than those that do (even after this series).
> > We need to figure out what to do about those before adding checks on it being
> > set.  
> 
> Right, I don't think you've touched *any* of the x86 PMUs for example,
> and getting everybody that boots an x86 kernel a warning isn't going to
> go over well :-)
> 

It was tempting :) "Warning: Parentless PMU: try a different architecture."

I'd love some input on what the x86 PMU devices' parents should be.
CPU counters in general tend to just spin out from deep in the architecture code.

My overall favorite is an l2 cache related PMU that is spun up in
arch/arm/kernel/irq.c init_IRQ()

I'm just not going to try and figure out why...

Jonathan
  
Greg KH April 6, 2023, 5:08 p.m. UTC | #6
On Thu, Apr 06, 2023 at 05:44:45PM +0100, Jonathan Cameron wrote:
> On Thu, 6 Apr 2023 14:40:40 +0200
> Peter Zijlstra <peterz@infradead.org> wrote:
> 
> > On Thu, Apr 06, 2023 at 11:16:07AM +0100, Jonathan Cameron wrote:
> > 
> > > In the long run I agree it would be good.  Short term there are more instances of
> > > struct pmu that don't have parents than those that do (even after this series).
> > > We need to figure out what to do about those before adding checks on it being
> > > set.  
> > 
> > Right, I don't think you've touched *any* of the x86 PMUs for example,
> > and getting everybody that boots an x86 kernel a warning isn't going to
> > go over well :-)
> > 
> 
> It was tempting :) "Warning: Parentless PMU: try a different architecture."
> 
> I'd love some input on what the x86 PMU devices' parents should be.
> CPU counters in general tend to just spin out from deep in the architecture code.
> 
> My overall favorite is an l2 cache related PMU that is spun up in
> arch/arm/kernel/irq.c init_IRQ()
> 
> I'm just not going to try and figure out why...

Why not change the api to force a parent to be passed in?  And if one
isn't, we make it a "virtual" device and throw it in the class for them?
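
Something like this, perhaps (hand-wavy, untested sketch;
perf_pmu_register_parented() is a made-up name):

int perf_pmu_register_parented(struct pmu *pmu, struct device *parent,
			       const char *name, int type)
{
	/*
	 * A NULL parent would mean "no real device here": the core could
	 * then fall back to a virtual device rather than warn.
	 */
	pmu->parent = parent;
	return perf_pmu_register(pmu, name, type);
}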

thanks,

greg k-h
  
Peter Zijlstra April 6, 2023, 7:49 p.m. UTC | #7
On Thu, Apr 06, 2023 at 05:44:45PM +0100, Jonathan Cameron wrote:
> On Thu, 6 Apr 2023 14:40:40 +0200
> Peter Zijlstra <peterz@infradead.org> wrote:
> 
> > On Thu, Apr 06, 2023 at 11:16:07AM +0100, Jonathan Cameron wrote:
> > 
> > > In the long run I agree it would be good.  Short term there are more instances of
> > > struct pmu that don't have parents than those that do (even after this series).
> > > We need to figure out what to do about those before adding checks on it being
> > > set.  
> > 
> > Right, I don't think you've touched *any* of the x86 PMUs for example,
> > and getting everybody that boots an x86 kernel a warning isn't going to
> > go over well :-)
> > 
> 
> It was tempting :) "Warning: Parentless PMU: try a different architecture."

Haha!

> I'd love some input on what the x86 PMU devices' parents should be.
> CPU counters in general tend to just spin out from deep in the architecture code.

For the 'simple' ones I suppose we can use the CPU device.

> My overall favorite is an l2 cache related PMU that is spun up in
> arch/arm/kernel/irq.c init_IRQ()

Yeah, we're going to have a ton of them as well. Some of them are PCI
devices and have a clear parent, others, not so much :/
  
Jonathan Cameron April 12, 2023, 9:56 a.m. UTC | #8
On Thu, 6 Apr 2023 19:08:45 +0200
Greg KH <gregkh@linuxfoundation.org> wrote:

> On Thu, Apr 06, 2023 at 05:44:45PM +0100, Jonathan Cameron wrote:
> > On Thu, 6 Apr 2023 14:40:40 +0200
> > Peter Zijlstra <peterz@infradead.org> wrote:
> >   
> > > On Thu, Apr 06, 2023 at 11:16:07AM +0100, Jonathan Cameron wrote:
> > >   
> > > > In the long run I agree it would be good.  Short term there are more instances of
> > > > struct pmu that don't have parents than those that do (even after this series).
> > > > We need to figure out what to do about those before adding checks on it being
> > > > set.    
> > > 
> > > Right, I don't think you've touched *any* of the x86 PMUs for example,
> > > and getting everybody that boots an x86 kernel a warning isn't going to
> > > go over well :-)
> > >   
> > 
> > It was tempting :) "Warning: Parentless PMU: try a different architecture."
> > 
> > I'd love some input on what the x86 PMU devices' parents should be.
> > CPU counters in general tend to just spin out from deep in the architecture code.
> > 
> > My overall favorite is an l2 cache related PMU that is spun up in
> > arch/arm/kernel/irq.c init_IRQ()
> > 
> > I'm just not going to try and figure out why...  
> 
> Why not change the api to force a parent to be passed in?  And if one
> isn't, we make it a "virtual" device and throw it in the class for them?

Longer term I'd be fine doing that, but I'd like to identify the right parents
rather than end up sweeping it under the carpet.  Anything we either get completely
stuck on (or decide we don't care about) could indeed fall back to a virtual
device.

Jonathan


> 
> thanks,
> 
> greg k-h
  
Robin Murphy April 12, 2023, 12:41 p.m. UTC | #9
On 2023-04-06 17:44, Jonathan Cameron wrote:
> On Thu, 6 Apr 2023 14:40:40 +0200
> Peter Zijlstra <peterz@infradead.org> wrote:
> 
>> On Thu, Apr 06, 2023 at 11:16:07AM +0100, Jonathan Cameron wrote:
>>
>>> In the long run I agree it would be good.  Short term there are more instances of
>>> struct pmu that don't have parents than those that do (even after this series).
>>> We need to figure out what to do about those before adding checks on it being
>>> set.
>>
>> Right, I don't think you've touched *any* of the x86 PMUs for example,
>> and getting everybody that boots an x86 kernel a warning isn't going to
>> go over well :-)
>>
> 
> It was tempting :) "Warning: Parentless PMU: try a different architecture."
> 
> I'd love some input on what the x86 PMU devices' parents should be.
> CPU counters in general tend to just spin out from deep in the architecture code.
> 
> My overall favorite is an l2 cache related PMU that is spun up in
> arch/arm/kernel/irq.c init_IRQ()
> 
> I'm just not going to try and figure out why...

I think that's simply because the PMU support was hung off the existing
PL310 configuration code, which still supports non-DT boardfiles. The
PMU shouldn't strictly need to be registered that early; it would just
be a bunch more work to ensure that a platform device is available for
it to bind to as a regular driver-model driver, which wasn't justifiable
at the time.

Thanks,
Robin.
  
Mark Rutland June 6, 2023, 1:06 p.m. UTC | #10
On Thu, Apr 06, 2023 at 09:49:38PM +0200, Peter Zijlstra wrote:
> On Thu, Apr 06, 2023 at 05:44:45PM +0100, Jonathan Cameron wrote:
> > On Thu, 6 Apr 2023 14:40:40 +0200
> > Peter Zijlstra <peterz@infradead.org> wrote:
> > 
> > > On Thu, Apr 06, 2023 at 11:16:07AM +0100, Jonathan Cameron wrote:
> > > 
> > > > In the long run I agree it would be good.  Short term there are more instances of
> > > > struct pmu that don't have parents than those that do (even after this series).
> > > > We need to figure out what to do about those before adding checks on it being
> > > > set.  
> > > 
> > > Right, I don't think you've touched *any* of the x86 PMUs for example,
> > > and getting everybody that boots an x86 kernel a warning isn't going to
> > > go over well :-)
> > > 
> > 
> > It was tempting :) "Warning: Parentless PMU: try a different architecture."
> 
> Haha!
> 
> > I'd love some input on what the x86 PMU devices' parents should be.
> > CPU counters in general tend to just spin out from deep in the architecture code.
> 
> For the 'simple' ones I suppose we can use the CPU device.

Uh, *which* CPU device? Do we have a container device for all CPUs?

> > My overall favorite is an l2 cache related PMU that is spun up in
> > arch/arm/kernel/irq.c init_IRQ()

That's an artifact of the L2 cache controller driver getting initialized there;
ideally we'd have a device for the L2 cache itself (which presumably should
hang off an aggregate CPU device).

> Yeah, we're going to have a ton of them as well. Some of them are PCI
> devices and have a clear parent, others, not so much :/

In a number of places the only thing we have is the PMU driver, and we don't
have a driver (or device) for the HW block it's a part of. Largely that's
interconnect PMUs; we could create container devices there.

Mark.
  
Peter Zijlstra June 6, 2023, 1:18 p.m. UTC | #11
On Tue, Jun 06, 2023 at 02:06:24PM +0100, Mark Rutland wrote:
> On Thu, Apr 06, 2023 at 09:49:38PM +0200, Peter Zijlstra wrote:
> > On Thu, Apr 06, 2023 at 05:44:45PM +0100, Jonathan Cameron wrote:
> > > On Thu, 6 Apr 2023 14:40:40 +0200
> > > Peter Zijlstra <peterz@infradead.org> wrote:
> > > 
> > > > On Thu, Apr 06, 2023 at 11:16:07AM +0100, Jonathan Cameron wrote:
> > > > 
> > > > > In the long run I agree it would be good.  Short term there are more instances of
> > > > > struct pmu that don't have parents than those that do (even after this series).
> > > > > We need to figure out what to do about those before adding checks on it being
> > > > > set.  
> > > > 
> > > > Right, I don't think you've touched *any* of the x86 PMUs for example,
> > > > and getting everybody that boots an x86 kernel a warning isn't going to
> > > > go over well :-)
> > > > 
> > > 
> > > It was tempting :) "Warning: Parentless PMU: try a different architecture."
> > 
> > Haha!
> > 
> > > I'd love some input on what the x86 PMU devices' parents should be.
> > > CPU counters in general tend to just spin out from deep in the architecture code.
> > 
> > For the 'simple' ones I suppose we can use the CPU device.
> 
> Uh, *which* CPU device? Do we have a container device for all CPUs?

drivers/base/cpu.c:per_cpu(cpu_sys_devices, cpu) for whatever the core
pmu is for that cpu ?
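
That per-CPU device is what get_cpu_device() hands back, so for a core
PMU associated with a single CPU it could be as simple as (untested):

	/* get_cpu_device() is declared in <linux/cpu.h>. */
	pmu->parent = get_cpu_device(cpu);	/* NULL if the CPU isn't registered */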

> > > My overall favorite is an l2 cache related PMU that is spun up in
> > > arch/arm/kernel/irq.c init_IRQ()
> 
> That's an artifact of the L2 cache controller driver getting initialized there;
> ideally we'd have a device for the L2 cache itself (which presumably should
> hang off an aggregate CPU device).

/sys/devices/system/cpu/cpuN/cache/indexM

has a struct device somewhere in
drivers/base/cacheinfo.c:ci_index_dev or somesuch.
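
(Those per-CPU index device pointers are static to cacheinfo.c today, so
a cache PMU wanting one as its parent would need a hypothetical accessor
along these lines, exported from there:)

struct device *get_cpu_cache_index_dev(unsigned int cpu, int index)
{
	struct device **index_devs = per_cpu(ci_index_dev, cpu);

	return index_devs ? index_devs[index] : NULL;
}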

> > Yeah, we're going to have a ton of them as well. Some of them are PCI
> > devices and have a clear parent, others, not so much :/
> 
> In a number of places the only thing we have is the PMU driver, and we don't
> have a driver (or device) for the HW block it's a part of. Largely that's
> interconnect PMUs; we could create container devices there.

Don't they have a PCI device? But yeah, some are going to be a wee bit
challenging.
  
Mark Rutland June 6, 2023, 1:30 p.m. UTC | #12
On Tue, Jun 06, 2023 at 03:18:59PM +0200, Peter Zijlstra wrote:
> On Tue, Jun 06, 2023 at 02:06:24PM +0100, Mark Rutland wrote:
> > On Thu, Apr 06, 2023 at 09:49:38PM +0200, Peter Zijlstra wrote:
> > > On Thu, Apr 06, 2023 at 05:44:45PM +0100, Jonathan Cameron wrote:
> > > > On Thu, 6 Apr 2023 14:40:40 +0200
> > > > Peter Zijlstra <peterz@infradead.org> wrote:
> > > > 
> > > > > On Thu, Apr 06, 2023 at 11:16:07AM +0100, Jonathan Cameron wrote:
> > > > > 
> > > > > > In the long run I agree it would be good.  Short term there are more instances of
> > > > > > struct pmu that don't have parents than those that do (even after this series).
> > > > > > We need to figure out what to do about those before adding checks on it being
> > > > > > set.  
> > > > > 
> > > > > Right, I don't think you've touched *any* of the x86 PMUs for example,
> > > > > and getting everybody that boots an x86 kernel a warning isn't going to
> > > > > go over well :-)
> > > > > 
> > > > 
> > > > It was tempting :) "Warning: Parentless PMU: try a different architecture."
> > > 
> > > Haha!
> > > 
> > > > I'd love some input on what the x86 PMU devices' parents should be.
> > > > CPU counters in general tend to just spin out from deep in the architecture code.
> > > 
> > > For the 'simple' ones I suppose we can use the CPU device.
> > 
> > Uh, *which* CPU device? Do we have a container device for all CPUs?
> 
> drivers/base/cpu.c:per_cpu(cpu_sys_devices, cpu) for whatever the core
> pmu is for that cpu ?

... but the struct pmu covers several CPUs, so I don't have a single 'cpu', no?

If I have a system where cpu{0,1,2} are Cortex-A53 and cpu{3,4} are Cortex-A57,
I have two struct pmu instances, each associated with several CPUs. When I
probe each of those I determine a cpumask for each.

> > > > My overall favorite is an l2 cache related PMU that is spun up in
> > > > arch/arm/kernel/irq.c init_IRQ()
> > 
> > That's an artifact of the L2 cache controller driver getting initialized there;
> > ideally we'd have a device for the L2 cache itself (which presumably should
> > hang off an aggregate CPU device).
> 
> /sys/devices/system/cpu/cpuN/cache/indexM
> 
> has a struct device somewhere in
> drivers/base/cacheinfo.c:ci_index_dev or somesuch.

I guess, but I don't think the L2 cache controller (the PL310) is actually tied
to that today.

> > > Yeah, we're going to have a ton of them as well. Some of them are PCI
> > > devices and have a clear parent, others, not so much :/
> > 
> > In a number of places the only thing we have is the PMU driver, and we don't
> > have a driver (or device) for the HW block it's a part of. Largely that's
> > interconnect PMUs; we could create container devices there.
> 
> Don't they have a PCI device? But yeah, some are going to be a wee bit
> challenging.

The system might not even have PCI, so it's arguable that they should just hang
off an MMIO bus (which is effectively what the platform bus is).

Thanks,
Mark.
  
Peter Zijlstra June 6, 2023, 1:48 p.m. UTC | #13
On Tue, Jun 06, 2023 at 02:30:52PM +0100, Mark Rutland wrote:

> > > Uh, *which* CPU device? Do we have a container device for all CPUs?
> > 
> > drivers/base/cpu.c:per_cpu(cpu_sys_devices, cpu) for whatever the core
> > pmu is for that cpu ?
> 
> ... but the struct pmu covers several CPUs, so I don't have a single 'cpu', no?
> 
> If I have a system where cpu{0,1,2} are Cortex-A53 and cpu{3,4} are Cortex-A57,
> I have two struct pmu instances, each associated with several CPUs. When I
> probe each of those I determine a cpumask for each.

Bah :/ Clearly I overlooked the disparity there.

> > > > > My overall favorite is an l2 cache related PMU that is spun up in
> > > > > arch/arm/kernel/irq.c init_IRQ()
> > > 
> > > That's an artifact of the L2 cache controller driver getting initialized there;
> > > ideally we'd have a device for the L2 cache itself (which presumably should
> > > hang off an aggregate CPU device).
> > 
> > /sys/devices/system/cpu/cpuN/cache/indexM
> > 
> > has a struct device somewhere in
> > drivers/base/cacheinfo.c:ci_index_dev or somesuch.
> 
> I guess, but I don't think the L2 cache controller (the PL310) is actually tied
> to that today.

All it would do is make fancy links in sysfs I think, who cares ;-)

> > > > Yeah, we're going to have a ton of them as well. Some of them are PCI
> > > > devices and have a clear parent, others, not so much :/
> > > 
> > > In a number of places the only thing we have is the PMU driver, and we don't
> > > have a driver (or device) for the HW block it's a part of. Largely that's
> > > interconnect PMUs; we could create container devices there.
> > 
> > Don't they have a PCI device? But yeah, some are going to be a wee bit
> > challenging.
> 
> The system might not even have PCI, so it's arguable that they should just hang
> off an MMIO bus (which is effectively what the platform bus is).

You and your dodgy platforms :-)
  
Robin Murphy June 7, 2023, 11 a.m. UTC | #14
On 2023-06-06 14:48, Peter Zijlstra wrote:
> On Tue, Jun 06, 2023 at 02:30:52PM +0100, Mark Rutland wrote:
> 
>>>> Uh, *which* CPU device? Do we have a container device for all CPUs?
>>>
>>> drivers/base/cpu.c:per_cpu(cpu_sys_devices, cpu) for whatever the core
>>> pmu is for that cpu ?
>>
>> ... but the struct pmu covers several CPUs, so I don't have a single 'cpu', no?
>>
>> If I have a system where cpu{0,1,2} are Cortex-A53 and cpu{3,4} are Cortex-A57,
>> I have two struct pmu instances, each associated with several CPUs. When I
>> probe each of those I determine a cpumask for each.
> 
> Bah :/ Clearly I overlooked the disparity there.
> 
>>>>>> My overall favorite is an l2 cache related PMU that is spun up in
>>>>>> arch/arm/kernel/irq.c init_IRQ()
>>>>
>>>> That's an artifact of the L2 cache controller driver getting initialized there;
>>>> ideally we'd have a device for the L2 cache itself (which presumably should
>>>> hang off an aggregate CPU device).
>>>
>>> /sys/devices/system/cpu/cpuN/cache/indexM
>>>
>>> has a struct device somewhere in
>>> drivers/base/cacheinfo.c:ci_index_dev or somesuch.
>>
>> I guess, but I don't think the L2 cache controller (the PL310) is actually tied
>> to that today.
> 
> All it would do is make fancy links in sysfs I think, who cares ;-)
> 
>>>>> Yeah, we're going to have a ton of them as well. Some of them are PCI
>>>>> devices and have a clear parent, others, not so much :/
>>>>
>>>> In a number of places the only thing we have is the PMU driver, and we don't
>>>> have a driver (or device) for the HW block it's a part of. Largely that's
>>>> interconnect PMUs; we could create container devices there.
>>>
>>> Don't they have a PCI device? But yeah, some are going to be a wee bit
>>> challenging.
>>
>> The system might not even have PCI, so it's arguable that they should just hang
>> off an MMIO bus (which is effectively what the platform bus is).
> 
> You and your dodgy platforms :-)

For system PMUs we'll pretty much always have a platform device 
corresponding to a DT/ACPI entry used to describe MMIO registers and/or 
interrupts. In many cases the PMU is going to be the only part of the 
underlying device that is meaningful to Linux anyway, so I don't see any 
issue with just hanging the PMU device off its corresponding platform 
device - it still gives the user a way to map a PMU instance back to 
some understandable system topology (i.e. ACPI/DT) to disambiguate, and 
that's the most important thing.
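
For a typical system PMU that would look something like this in the
platform driver's probe (a sketch; every name apart from pmu->parent is
invented):

struct my_sys_pmu {
	struct pmu pmu;		/* plus driver-private state */
};

static int my_sys_pmu_probe(struct platform_device *pdev)
{
	struct my_sys_pmu *spmu;

	spmu = devm_kzalloc(&pdev->dev, sizeof(*spmu), GFP_KERNEL);
	if (!spmu)
		return -ENOMEM;

	/* Hang the PMU off the DT/ACPI-described platform device. */
	spmu->pmu.parent = &pdev->dev;

	return perf_pmu_register(&spmu->pmu, dev_name(&pdev->dev), -1);
}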

Thanks,
Robin.
  

Patch

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index d5628a7b5eaa..b99db1eda72c 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -303,6 +303,7 @@ struct pmu {
 
 	struct module			*module;
 	struct device			*dev;
+	struct device			*parent;
 	const struct attribute_group	**attr_groups;
 	const struct attribute_group	**attr_update;
 	const char			*name;
diff --git a/kernel/events/core.c b/kernel/events/core.c
index fb3e436bcd4a..a84c282221f2 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -11367,6 +11367,7 @@ static int pmu_dev_alloc(struct pmu *pmu)
 
 	dev_set_drvdata(pmu->dev, pmu);
 	pmu->dev->bus = &pmu_bus;
+	pmu->dev->parent = pmu->parent;
 	pmu->dev->release = pmu_dev_release;
 
 	ret = dev_set_name(pmu->dev, "%s", pmu->name);