[v18,5/7] kexec: exclude hot remove cpu from elfcorehdr notes

Message ID 20230131224236.122805-6-eric.devolder@oracle.com
State New
Series: crash: Kernel handling of CPU and memory hot un/plug

Commit Message

Eric DeVolder Jan. 31, 2023, 10:42 p.m. UTC
  In crash_prepare_elf64_headers(), for_each_present_cpu() is used to
create the new elfcorehdr. When handling CPU hot unplug/offline
events, the CPU is still in the for_each_present_cpu() list (it does
not leave the list until cpuhp state processing reaches
CPUHP_OFFLINE). Thus the CPU must be explicitly excluded when
building the new list of CPUs.

This change records in handle_hotplug_event() the CPU to be excluded,
and adds the check that skips that CPU in
crash_prepare_elf64_headers().

Signed-off-by: Eric DeVolder <eric.devolder@oracle.com>
Acked-by: Baoquan He <bhe@redhat.com>
---
 kernel/crash_core.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)
  

Comments

Thomas Gleixner Feb. 1, 2023, 11:33 a.m. UTC | #1
Eric!

On Tue, Jan 31 2023 at 17:42, Eric DeVolder wrote:
> --- a/kernel/crash_core.c
> +++ b/kernel/crash_core.c
> @@ -366,6 +366,14 @@ int crash_prepare_elf64_headers(struct kimage *image, struct crash_mem *mem,
>  
>  	/* Prepare one phdr of type PT_NOTE for each present CPU */
>  	for_each_present_cpu(cpu) {
> +#ifdef CONFIG_CRASH_HOTPLUG
> +		if (IS_ENABLED(CONFIG_HOTPLUG_CPU)) {
> +			/* Skip the soon-to-be offlined cpu */
> +			if ((image->hp_action == KEXEC_CRASH_HP_REMOVE_CPU) &&
> +				(cpu == image->offlinecpu))
> +				continue;
> +		}
> +#endif

I'm failing to see how the above is correct in any way. Look at the
following sequence of events:

     1) Offline CPU$N

        -> Prepare elf headers with CPU$N excluded

     2) Another hotplug operation != 'Online CPU$N'

        -> Prepare elf headers with CPU$N included

Also in case of loading the crash kernel in the situation where not all
present CPUs are online (think boot time SMT disable) then your
resulting crash image will contain all present CPUs and none of the
offline CPUs are excluded.

How does that make any sense at all?

This image->hp_action and image->offlinecpu dance is engineering
voodoo. You just can do:

        for_each_present_cpu(cpu) {
            if (!cpu_online(cpu))
            	continue;
            do_stuff(cpu);

which does the right thing in all situations and can be further
simplified to:

        for_each_online_cpu(cpu) {
            do_stuff(cpu);

without the need for ifdefs or whatever.

No?

Thanks,

        tglx
  
Sourabh Jain Feb. 6, 2023, 8:12 a.m. UTC | #2
Hello Thomas,

On 01/02/23 17:03, Thomas Gleixner wrote:
> Eric!
>
> On Tue, Jan 31 2023 at 17:42, Eric DeVolder wrote:
>> --- a/kernel/crash_core.c
>> +++ b/kernel/crash_core.c
>> @@ -366,6 +366,14 @@ int crash_prepare_elf64_headers(struct kimage *image, struct crash_mem *mem,
>>   
>>   	/* Prepare one phdr of type PT_NOTE for each present CPU */
>>   	for_each_present_cpu(cpu) {
>> +#ifdef CONFIG_CRASH_HOTPLUG
>> +		if (IS_ENABLED(CONFIG_HOTPLUG_CPU)) {
>> +			/* Skip the soon-to-be offlined cpu */
>> +			if ((image->hp_action == KEXEC_CRASH_HP_REMOVE_CPU) &&
>> +				(cpu == image->offlinecpu))
>> +				continue;
>> +		}
>> +#endif
> I'm failing to see how the above is correct in any way. Look at the
> following sequence of events:
>
>       1) Offline CPU$N
>
>          -> Prepare elf headers with CPU$N excluded
>
>       2) Another hotplug operation != 'Online CPU$N'
>
>          -> Prepare elf headers with CPU$N included
>
> Also in case of loading the crash kernel in the situation where not all
> present CPUs are online (think boot time SMT disable) then your
> resulting crash image will contain all present CPUs and none of the
> offline CPUs are excluded.
>
> How does that make any sense at all?
>
> This image->hp_action and image->offlinecpu dance is engineering
> voodoo. You just can do:
>
>          for_each_present_cpu(cpu) {
>              if (!cpu_online(cpu))
>              	continue;
>              do_stuff(cpu);
>
> which does the right thing in all situations and can be further
> simplified to:
>
>          for_each_online_cpu(cpu) {
>              do_stuff(cpu);

What will be the implication on x86 if we pack PT_NOTE for possible CPUs?

IIUC, on boot the crash notes are created for possible CPUs using
pcpu_alloc, and when the system is on the crash path the crash notes
for online CPUs are populated with the required data while the rest of
the crash notes are left untouched.

And I think the /proc/vmcore generation in the kdump/second kernel and
makedumpfile do take care of empty crash notes belonging to offline CPUs.
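
For reference, a simplified sketch of that boot-time allocation,
paraphrased from the upstream crash_notes_memory_init() in the kexec
core (details such as error paths may differ by kernel version):

/* Simplified paraphrase: one note buffer per possible CPU is allocated
 * once at boot; the buffers are only written on the crash path. */
static int __init crash_notes_memory_init(void)
{
	/* Allocate memory for saving cpu registers. */
	size_t size, align;

	size  = sizeof(note_buf_t);
	align = min(roundup_pow_of_two(sizeof(note_buf_t)), PAGE_SIZE);

	crash_notes = __alloc_percpu(size, align);
	if (!crash_notes) {
		pr_warn("Memory allocation for saving cpu register states failed\n");
		return -ENOMEM;
	}
	return 0;
}
subsys_initcall(crash_notes_memory_init);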

Any thoughts?

Thanks,
Sourabh
  
Thomas Gleixner Feb. 6, 2023, 1:03 p.m. UTC | #3
On Mon, Feb 06 2023 at 13:42, Sourabh Jain wrote:
> On 01/02/23 17:03, Thomas Gleixner wrote:
>> Also in case of loading the crash kernel in the situation where not all
>> present CPUs are online (think boot time SMT disable) then your
>> resulting crash image will contain all present CPUs and none of the
>> offline CPUs are excluded.
>>
>> How does that make any sense at all?
>>
>> This image->hp_action and image->offlinecpu dance is engineering
>> voodoo. You just can do:
>>
>>          for_each_present_cpu(cpu) {
>>              if (!cpu_online(cpu))
>>              	continue;
>>              do_stuff(cpu);
>>
>> which does the right thing in all situations and can be further
>> simplified to:
>>
>>          for_each_online_cpu(cpu) {
>>              do_stuff(cpu);
>
> What will be the implication on x86 if we pack PT_NOTE for possible
> CPUs?

I don't know.

> IIUC, on boot the crash notes are create for possible CPUs using pcpu_alloc
> and when the system is on crash path the crash notes for online CPUs is
> populated with the required data and rest crash notes are untouched.

Which should be fine. That's a problem of postprocessing and it's
unclear to me from the changelogs what the actual problem is which is
trying to be solved here.

Thanks,

        tglx
  
Eric DeVolder Feb. 7, 2023, 5:23 p.m. UTC | #4
On 2/1/23 05:33, Thomas Gleixner wrote:
> Eric!
> 
> On Tue, Jan 31 2023 at 17:42, Eric DeVolder wrote:
>> --- a/kernel/crash_core.c
>> +++ b/kernel/crash_core.c
>> @@ -366,6 +366,14 @@ int crash_prepare_elf64_headers(struct kimage *image, struct crash_mem *mem,
>>   
>>   	/* Prepare one phdr of type PT_NOTE for each present CPU */
>>   	for_each_present_cpu(cpu) {
>> +#ifdef CONFIG_CRASH_HOTPLUG
>> +		if (IS_ENABLED(CONFIG_HOTPLUG_CPU)) {
>> +			/* Skip the soon-to-be offlined cpu */
>> +			if ((image->hp_action == KEXEC_CRASH_HP_REMOVE_CPU) &&
>> +				(cpu == image->offlinecpu))
>> +				continue;
>> +		}
>> +#endif
> 
> I'm failing to see how the above is correct in any way. Look at the
> following sequence of events:
> 
>       1) Offline CPU$N
> 
>          -> Prepare elf headers with CPU$N excluded
> 
>       2) Another hotplug operation != 'Online CPU$N'
> 
>          -> Prepare elf headers with CPU$N included
> 
> Also in case of loading the crash kernel in the situation where not all
> present CPUs are online (think boot time SMT disable) then your
> resulting crash image will contain all present CPUs and none of the
> offline CPUs are excluded.
> 
> How does that make any sense at all?
> 
> This image->hp_action and image->offlinecpu dance is engineering
> voodoo. You just can do:
> 
>          for_each_present_cpu(cpu) {
>              if (!cpu_online(cpu))
>              	continue;
>              do_stuff(cpu);
> 
> which does the right thing in all situations and can be further
> simplified to:
> 
>          for_each_online_cpu(cpu) {
>              do_stuff(cpu);
> 
> without the need for ifdefs or whatever.
> 
> No?
> 
> Thanks,
> 
>          tglx

Thomas,

I've been re-examining the cpuhp framework and now understand its
operation a bit better.

Up until now, this patch series has been using either CPUHP_AP_ONLINE_DYN
or more recently CPUHP_BP_PREPARE_DYN with the same handler for both the
startup and teardown callbacks. This resulted in the cpu state, as seen by
my handler, being incorrect in one direction or the other. For example,
when using CPUHP_AP_ONLINE_DYN, cpu_online() always returned 1 for the
cpu in my callback, even during tear down. For CPUHP_BP_PREPARE_DYN,
cpu_online() always returned 0. Thus the offlinecpu voodoo.

But no more!

The reason, as I now understand, is simple. A cpu will not show as online
until after state CPUHP_BRINGUP_CPU (when working from CPUHP_OFFLINE towards
CPUHP_ONLINE). And a cpu will not show as offline until after state
CPUHP_TEARDOWN_CPU (when working reverse order from CPUHP_ONLINE to
CPUHP_OFFLINE).

The CPUHP_BRINGUP_CPU is the last state of the PREPARE section, and boots
the new cpu. It is code running on the booting cpu that marks itself as
online.

  CPUHP_BRINGUP_CPU
    .startup()
      bringup_cpu()
        __cpu_up()
         smp_ops.cpu_up()
          native_cpu_up()
           do_boot_cpu()
            ===== on new cpu! =====
            start_secondary()
             set_cpu_online(true)

There are quite a few CPUHP_..._STARTING states before the cpu is in a productive state.

The CPUHP_TEARDOWN_CPU is the last state in the STARTING section, and takes the cpu down.
Work/irqs are removed from this cpu and re-assigned to others.

  CPUHP_TEARDOWN_CPU
    .teardown()
     takedown_cpu()
      take_cpu_down()
       __cpu_disable()
        smp_ops.cpu_disable()
         native_cpu_disable()
          cpu_disable_common()
           remove_cpu_from_maps()
            set_cpu_online(false)

So my latest solution is to introduce two new CPUHP states, CPUHP_AP_ELFCOREHDR_ONLINE
for onlining and CPUHP_BP_ELFCOREHDR_OFFLINE for offlining. I'm open to better names.

The CPUHP_AP_ELFCOREHDR_ONLINE needs to be placed after CPUHP_BRINGUP_CPU. My
attempts at locating this state failed when inside the STARTING section, so I located
this just inside the ONLINE section. The crash hotplug handler is registered on
this state as the callback for the .startup method.

The CPUHP_BP_ELFCOREHDR_OFFLINE needs to be placed before CPUHP_TEARDOWN_CPU, and I
placed it at the end of the PREPARE section. This crash hotplug handler is also
registered on this state as the callback for the .teardown method.

diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
index 6c6859bfc454..52d2db4d793e 100644
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -131,6 +131,7 @@ enum cpuhp_state {
     CPUHP_ZCOMP_PREPARE,
     CPUHP_TIMERS_PREPARE,
     CPUHP_MIPS_SOC_PREPARE,
+   CPUHP_BP_ELFCOREHDR_OFFLINE,
     CPUHP_BP_PREPARE_DYN,
     CPUHP_BP_PREPARE_DYN_END        = CPUHP_BP_PREPARE_DYN + 20,
     CPUHP_BRINGUP_CPU,
@@ -205,6 +206,7 @@ enum cpuhp_state {

     /* Online section invoked on the hotplugged CPU from the hotplug thread */
     CPUHP_AP_ONLINE_IDLE,
+   CPUHP_AP_ELFCOREHDR_ONLINE,
     CPUHP_AP_SCHED_WAIT_EMPTY,
     CPUHP_AP_SMPBOOT_THREADS,
     CPUHP_AP_X86_VDSO_VMA_ONLINE,

diff --git a/kernel/crash_core.c b/kernel/crash_core.c
index 8a439b6d723b..e1a3430f06f4 100644
--- a/kernel/crash_core.c
+++ b/kernel/crash_core.c

+   if (IS_ENABLED(CONFIG_HOTPLUG_CPU)) {
+       result = cpuhp_setup_state_nocalls(CPUHP_AP_ELFCOREHDR_ONLINE,
+                          "crash/cpuhp_online", crash_cpuhp_online, NULL);
+       result = cpuhp_setup_state_nocalls(CPUHP_BP_ELFCOREHDR_OFFLINE,
+                          "crash/cpuhp_offline", NULL, crash_cpuhp_offline);
+   }

With the above, there is no need for offlinecpu, as the crash hotplug handler
callback now observes the correct cpu_online() state in both online and offline
activities.
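
For completeness, the callback bodies referenced above are not shown
here; a minimal sketch of what they are assumed to look like (the
handler name and the action constants follow this series, but the exact
signature of handle_hotplug_event() is an assumption):

/* Hypothetical sketch: the cpuhp callbacks simply forward to the
 * series' crash hotplug handler with the matching action. */
static int crash_cpuhp_online(unsigned int cpu)
{
	handle_hotplug_event(KEXEC_CRASH_HP_ADD_CPU, cpu);
	return 0;
}

static int crash_cpuhp_offline(unsigned int cpu)
{
	handle_hotplug_event(KEXEC_CRASH_HP_REMOVE_CPU, cpu);
	return 0;
}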

Which leads me to the next item. Thomas you suggested

           for_each_online_cpu(cpu) {
               do_stuff(cpu);

I've been looking into this further, and don't yet have a conclusion.
In light of Sourabh's comments/concerns about packing PT_NOTEs, I
need to determine if my introduction of

        if (IS_ENABLED(CONFIG_CRASH_HOTPLUG)) {
            if (!cpu_online(cpu)) continue;
        }

does not cause other downstream issues. My testing was focused on
hot plugging/unplugging cpus in a last-on-first-off manner, whereas
I now realize cpus can be onlined/offlined sparsely (thus the PT_NOTE
packing concern).

I'm making my way though percpu crash_notes, elfcorehdr, vmcoreinfo,
makedumpfile and (the consumer of it all) the userspace crash utility,
in order to understand the impact of moving from for_each_present_cpu()
to for_each_online_cpu().

At any rate, I wanted to at least put forth the introduction of the
two new CPUHP states and solicit feedback there while I investigate
the for_each_online_cpu() matter.

Thanks for pushing me on this topic!
eric
  
Thomas Gleixner Feb. 8, 2023, 1:44 p.m. UTC | #5
Eric!

On Tue, Feb 07 2023 at 11:23, Eric DeVolder wrote:
> On 2/1/23 05:33, Thomas Gleixner wrote:
>
> So my latest solution is introduce two new CPUHP states, CPUHP_AP_ELFCOREHDR_ONLINE
> for onlining and CPUHP_BP_ELFCOREHDR_OFFLINE for offlining. I'm open to better names.
>
> The CPUHP_AP_ELFCOREHDR_ONLINE needs to be placed after CPUHP_BRINGUP_CPU. My
> attempts at locating this state failed when inside the STARTING section, so I located
> this just inside the ONLINE sectoin. The crash hotplug handler is registered on
> this state as the callback for the .startup method.
>
> The CPUHP_BP_ELFCOREHDR_OFFLINE needs to be placed before CPUHP_TEARDOWN_CPU, and I
> placed it at the end of the PREPARE section. This crash hotplug handler is also
> registered on this state as the callback for the .teardown method.

TBH, that's still overengineered. Something like this:

bool cpu_is_alive(unsigned int cpu)
{
	struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);

	return data_race(st->state) <= CPUHP_AP_IDLE_DEAD;
}

and use this to query the actual state at crash time. That spares all
those callback heuristics.
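
A minimal sketch of how such a helper could be used (an assumption
about the wiring, not code from this series; since cpuhp_cpu_state is
private to kernel/cpu.c, the helper itself would have to live there):

/* Hypothetical caller: filter on the live CPUHP state instead of
 * tracking hotplug transitions through callbacks. */
for_each_present_cpu(cpu) {
	if (!cpu_is_alive(cpu))
		continue;
	/* emit a PT_NOTE / save the crash note for this cpu */
}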

> I'm making my way though percpu crash_notes, elfcorehdr, vmcoreinfo,
> makedumpfile and (the consumer of it all) the userspace crash utility,
> in order to understand the impact of moving from for_each_present_cpu()
> to for_each_online_cpu().

Is the packing actually worth the trouble? What's the actual win?

Thanks,

        tglx
  
Eric DeVolder Feb. 9, 2023, 5:31 p.m. UTC | #6
On 2/8/23 07:44, Thomas Gleixner wrote:
> Eric!
> 
> On Tue, Feb 07 2023 at 11:23, Eric DeVolder wrote:
>> On 2/1/23 05:33, Thomas Gleixner wrote:
>>
>> So my latest solution is introduce two new CPUHP states, CPUHP_AP_ELFCOREHDR_ONLINE
>> for onlining and CPUHP_BP_ELFCOREHDR_OFFLINE for offlining. I'm open to better names.
>>
>> The CPUHP_AP_ELFCOREHDR_ONLINE needs to be placed after CPUHP_BRINGUP_CPU. My
>> attempts at locating this state failed when inside the STARTING section, so I located
>> this just inside the ONLINE sectoin. The crash hotplug handler is registered on
>> this state as the callback for the .startup method.
>>
>> The CPUHP_BP_ELFCOREHDR_OFFLINE needs to be placed before CPUHP_TEARDOWN_CPU, and I
>> placed it at the end of the PREPARE section. This crash hotplug handler is also
>> registered on this state as the callback for the .teardown method.
> 
> TBH, that's still overengineered. Something like this:
> 
> bool cpu_is_alive(unsigned int cpu)
> {
> 	struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
> 
> 	return data_race(st->state) <= CPUHP_AP_IDLE_DEAD;
> }
> 
> and use this to query the actual state at crash time. That spares all
> those callback heuristics.
> 
>> I'm making my way though percpu crash_notes, elfcorehdr, vmcoreinfo,
>> makedumpfile and (the consumer of it all) the userspace crash utility,
>> in order to understand the impact of moving from for_each_present_cpu()
>> to for_each_online_cpu().
> 
> Is the packing actually worth the trouble? What's the actual win?
> 
> Thanks,
> 
>          tglx
> 
> 

Thomas,
I've investigated the passing of crash notes through the vmcore. What I've learned is that:

- linux/fs/proc/vmcore.c (which makedumpfile references to do its job) does
   not care what the contents of cpu PT_NOTES are, but it does coalesce them together.

- makedumpfile will count the number of cpu PT_NOTES in order to determine its
   nr_cpus variable, which is reported in a header, but otherwise unused (except
   for sadump method).

- the crash utility, for the purposes of determining the cpus, does not appear to
   reference the elfcorehdr PT_NOTEs. Instead it locates the various
   cpu_[possible|present|online]_mask and computes nr_cpus from that, and also of
   course which are online. In addition, when crash does reference the cpu PT_NOTE,
   to get its prstatus, it does so by using a percpu technique directly in the vmcore
   image memory, not via the ELF structure. Said differently, it appears to me that
   crash utility doesn't rely on the ELF PT_NOTEs for cpus; rather it obtains them
   via kernel cpumasks and the memory within the vmcore.

With this understanding, I did some testing. Perhaps the most telling test was that I
changed the number of cpu PT_NOTEs emitted in the crash_prepare_elf64_headers() to just 1,
hot plugged some cpus, then also took a few offline sparsely via chcpu, then generated a
vmcore. The crash utility had no problem loading the vmcore, it reported the proper number
of cpus and the number offline (despite only one cpu PT_NOTE), and after changing to a different
cpu via 'set -c 30' the backtrace was completely valid.

My take away is that crash utility does not rely upon ELF cpu PT_NOTEs, it obtains the
cpu information directly from kernel data structures. Perhaps at one time crash relied
upon the ELF information, but no more. (Perhaps there are other crash dump analyzers
that might rely on the ELF info?)

So, all this to say that I see no need to change crash_prepare_elf64_headers(). There
is no compelling reason to move away from for_each_present_cpu(), or modify the list for
online/offline.

Which then leaves the topic of the cpuhp state on which to register. Perhaps reverting
back to the use of CPUHP_BP_PREPARE_DYN is the right answer. There does not appear to
be a compelling need to accurately track whether the cpu went online/offline for the
purposes of creating the elfcorehdr, as ultimately the crash utility pulls that from
kernel data structures, not the elfcorehdr.

I think this is what Sourabh has understood and has been advocating: an optimization
path that avoids regenerating the elfcorehdr on cpu changes (because all the percpu
structs are already laid out). I do think it best to leave that as an arch choice.

Comments?

Thanks!
eric
  
Sourabh Jain Feb. 9, 2023, 6:43 p.m. UTC | #7
Hello Eric,

On 09/02/23 23:01, Eric DeVolder wrote:
>
>
> On 2/8/23 07:44, Thomas Gleixner wrote:
>> Eric!
>>
>> On Tue, Feb 07 2023 at 11:23, Eric DeVolder wrote:
>>> On 2/1/23 05:33, Thomas Gleixner wrote:
>>>
>>> So my latest solution is introduce two new CPUHP states, 
>>> CPUHP_AP_ELFCOREHDR_ONLINE
>>> for onlining and CPUHP_BP_ELFCOREHDR_OFFLINE for offlining. I'm open 
>>> to better names.
>>>
>>> The CPUHP_AP_ELFCOREHDR_ONLINE needs to be placed after 
>>> CPUHP_BRINGUP_CPU. My
>>> attempts at locating this state failed when inside the STARTING 
>>> section, so I located
>>> this just inside the ONLINE sectoin. The crash hotplug handler is 
>>> registered on
>>> this state as the callback for the .startup method.
>>>
>>> The CPUHP_BP_ELFCOREHDR_OFFLINE needs to be placed before 
>>> CPUHP_TEARDOWN_CPU, and I
>>> placed it at the end of the PREPARE section. This crash hotplug 
>>> handler is also
>>> registered on this state as the callback for the .teardown method.
>>
>> TBH, that's still overengineered. Something like this:
>>
>> bool cpu_is_alive(unsigned int cpu)
>> {
>>     struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
>>
>>     return data_race(st->state) <= CPUHP_AP_IDLE_DEAD;
>> }
>>
>> and use this to query the actual state at crash time. That spares all
>> those callback heuristics.
>>
>>> I'm making my way though percpu crash_notes, elfcorehdr, vmcoreinfo,
>>> makedumpfile and (the consumer of it all) the userspace crash utility,
>>> in order to understand the impact of moving from for_each_present_cpu()
>>> to for_each_online_cpu().
>>
>> Is the packing actually worth the trouble? What's the actual win?
>>
>> Thanks,
>>
>>          tglx
>>
>>
>
> Thomas,
> I've investigated the passing of crash notes through the vmcore. What 
> I've learned is that:
>
> - linux/fs/proc/vmcore.c (which makedumpfile references to do its job) 
> does
>   not care what the contents of cpu PT_NOTES are, but it does coalesce 
> them together.
>
> - makedumpfile will count the number of cpu PT_NOTES in order to 
> determine its
>   nr_cpus variable, which is reported in a header, but otherwise 
> unused (except
>   for sadump method).
>
> - the crash utility, for the purposes of determining the cpus, does 
> not appear to
>   reference the elfcorehdr PT_NOTEs. Instead it locates the various
>   cpu_[possible|present|online]_mask and computes nr_cpus from that, 
> and also of
>   course which are online. In addition, when crash does reference the 
> cpu PT_NOTE,
>   to get its prstatus, it does so by using a percpu technique directly 
> in the vmcore
>   image memory, not via the ELF structure. Said differently, it 
> appears to me that
>   crash utility doesn't rely on the ELF PT_NOTEs for cpus; rather it 
> obtains them
>   via kernel cpumasks and the memory within the vmcore.
>
> With this understanding, I did some testing. Perhaps the most telling 
> test was that I
> changed the number of cpu PT_NOTEs emitted in the 
> crash_prepare_elf64_headers() to just 1,
> hot plugged some cpus, then also took a few offline sparsely via 
> chcpu, then generated a
> vmcore. The crash utility had no problem loading the vmcore, it 
> reported the proper number
> of cpus and the number offline (despite only one cpu PT_NOTE), and 
> changing to a different
> cpu via 'set -c 30' and the backtrace was completely valid.
>
> My take away is that crash utility does not rely upon ELF cpu 
> PT_NOTEs, it obtains the
> cpu information directly from kernel data structures. Perhaps at one 
> time crash relied
> upon the ELF information, but no more. (Perhaps there are other crash 
> dump analyzers
> that might rely on the ELF info?)
>
> So, all this to say that I see no need to change 
> crash_prepare_elf64_headers(). There
> is no compelling reason to move away from for_each_present_cpu(), or 
> modify the list for
> online/offline.
>
> Which then leaves the topic of the cpuhp state on which to register. 
> Perhaps reverting
> back to the use of CPUHP_BP_PREPARE_DYN is the right answer. There 
> does not appear to
> be a compelling need to accurately track whether the cpu went 
> online/offline for the
> purposes of creating the elfcorehdr, as ultimately the crash utility 
> pulls that from
> kernel data structures, not the elfcorehdr.
>
> I think this is what Sourabh has known and has been advocating for an 
> optimization
> path that allows not regenerating the elfcorehdr on cpu changes 
> (because all the percpu
> structs are all laid out). I do think it best to leave that as an arch 
> choice.

Since things are clear on how the PT_NOTEs are consumed in the kdump
kernel [fs/proc/vmcore.c], makedumpfile, and the crash tool, I need
your opinion on this:

Do we really need to regenerate the elfcorehdr for CPU hotplug events?
If yes, can you please list the elfcorehdr components that change due
to CPU hotplug.

 From what I understood, crash notes are prepared for possible CPUs as
the system boots and could be used to create a PT_NOTE section for each
possible CPU while generating the elfcorehdr during the kdump kernel load.

Now once the elfcorehdr is loaded with PT_NOTEs for every possible CPU
there is no need to regenerate it for CPU hotplug events. Or do we?

Thanks,
Sourabh Jain
  
Eric DeVolder Feb. 9, 2023, 7:39 p.m. UTC | #8
On 2/9/23 12:43, Sourabh Jain wrote:
> Hello Eric,
> 
> On 09/02/23 23:01, Eric DeVolder wrote:
>>
>>
>> On 2/8/23 07:44, Thomas Gleixner wrote:
>>> Eric!
>>>
>>> On Tue, Feb 07 2023 at 11:23, Eric DeVolder wrote:
>>>> On 2/1/23 05:33, Thomas Gleixner wrote:
>>>>
>>>> So my latest solution is introduce two new CPUHP states, CPUHP_AP_ELFCOREHDR_ONLINE
>>>> for onlining and CPUHP_BP_ELFCOREHDR_OFFLINE for offlining. I'm open to better names.
>>>>
>>>> The CPUHP_AP_ELFCOREHDR_ONLINE needs to be placed after CPUHP_BRINGUP_CPU. My
>>>> attempts at locating this state failed when inside the STARTING section, so I located
>>>> this just inside the ONLINE sectoin. The crash hotplug handler is registered on
>>>> this state as the callback for the .startup method.
>>>>
>>>> The CPUHP_BP_ELFCOREHDR_OFFLINE needs to be placed before CPUHP_TEARDOWN_CPU, and I
>>>> placed it at the end of the PREPARE section. This crash hotplug handler is also
>>>> registered on this state as the callback for the .teardown method.
>>>
>>> TBH, that's still overengineered. Something like this:
>>>
>>> bool cpu_is_alive(unsigned int cpu)
>>> {
>>>     struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
>>>
>>>     return data_race(st->state) <= CPUHP_AP_IDLE_DEAD;
>>> }
>>>
>>> and use this to query the actual state at crash time. That spares all
>>> those callback heuristics.
>>>
>>>> I'm making my way though percpu crash_notes, elfcorehdr, vmcoreinfo,
>>>> makedumpfile and (the consumer of it all) the userspace crash utility,
>>>> in order to understand the impact of moving from for_each_present_cpu()
>>>> to for_each_online_cpu().
>>>
>>> Is the packing actually worth the trouble? What's the actual win?
>>>
>>> Thanks,
>>>
>>>          tglx
>>>
>>>
>>
>> Thomas,
>> I've investigated the passing of crash notes through the vmcore. What I've learned is that:
>>
>> - linux/fs/proc/vmcore.c (which makedumpfile references to do its job) does
>>   not care what the contents of cpu PT_NOTES are, but it does coalesce them together.
>>
>> - makedumpfile will count the number of cpu PT_NOTES in order to determine its
>>   nr_cpus variable, which is reported in a header, but otherwise unused (except
>>   for sadump method).
>>
>> - the crash utility, for the purposes of determining the cpus, does not appear to
>>   reference the elfcorehdr PT_NOTEs. Instead it locates the various
>>   cpu_[possible|present|online]_mask and computes nr_cpus from that, and also of
>>   course which are online. In addition, when crash does reference the cpu PT_NOTE,
>>   to get its prstatus, it does so by using a percpu technique directly in the vmcore
>>   image memory, not via the ELF structure. Said differently, it appears to me that
>>   crash utility doesn't rely on the ELF PT_NOTEs for cpus; rather it obtains them
>>   via kernel cpumasks and the memory within the vmcore.
>>
>> With this understanding, I did some testing. Perhaps the most telling test was that I
>> changed the number of cpu PT_NOTEs emitted in the crash_prepare_elf64_headers() to just 1,
>> hot plugged some cpus, then also took a few offline sparsely via chcpu, then generated a
>> vmcore. The crash utility had no problem loading the vmcore, it reported the proper number
>> of cpus and the number offline (despite only one cpu PT_NOTE), and changing to a different
>> cpu via 'set -c 30' and the backtrace was completely valid.
>>
>> My take away is that crash utility does not rely upon ELF cpu PT_NOTEs, it obtains the
>> cpu information directly from kernel data structures. Perhaps at one time crash relied
>> upon the ELF information, but no more. (Perhaps there are other crash dump analyzers
>> that might rely on the ELF info?)
>>
>> So, all this to say that I see no need to change crash_prepare_elf64_headers(). There
>> is no compelling reason to move away from for_each_present_cpu(), or modify the list for
>> online/offline.
>>
>> Which then leaves the topic of the cpuhp state on which to register. Perhaps reverting
>> back to the use of CPUHP_BP_PREPARE_DYN is the right answer. There does not appear to
>> be a compelling need to accurately track whether the cpu went online/offline for the
>> purposes of creating the elfcorehdr, as ultimately the crash utility pulls that from
>> kernel data structures, not the elfcorehdr.
>>
>> I think this is what Sourabh has known and has been advocating for an optimization
>> path that allows not regenerating the elfcorehdr on cpu changes (because all the percpu
>> structs are all laid out). I do think it best to leave that as an arch choice.
> 
> Since things are clear on how the PT_NOTES are consumed in kdump kernel [fs/proc/vmcore.c],
> makedumpfile, and crash tool I need your opinion on this:
> 
> Do we really need to regenerate elfcorehdr for CPU hotplug events?
> If yes, can you please list the elfcorehdr components that changes due to CPU hotplug.

Due to the use of for_each_present_cpu(), it is possible for the number of cpu PT_NOTEs
to fluctuate as cpus are un/plugged. Onlining/offlining of cpus does not impact the
number of cpu PT_NOTEs (as the cpus are still present).

> 
>  From what I understood, crash notes are prepared for possible CPUs as system boots and
> could be used to create a PT_NOTE section for each possible CPU while generating the elfcorehdr
> during the kdump kernel load.
> 
> Now once the elfcorehdr is loaded with PT_NOTEs for every possible CPU there is no need to
> regenerate it for CPU hotplug events. Or do we?

For onlining/offlining of cpus, there is no need to regenerate the elfcorehdr. However,
for actual hot un/plug of cpus, the answer is yes due to for_each_present_cpu(). The
caveat here of course is that if crash utility is the only coredump analyzer of concern,
then it doesn't care about these cpu PT_NOTEs and there would be no need to re-generate them.

Also, I'm not sure if ARM cpu hotplug, which is just now coming into mainstream, impacts
any of this.

Perhaps the one item that might help here is to distinguish between actual hot un/plug of
cpus, versus onlining/offlining. At the moment, I can not distinguish between a hot plug
event and an online event (and unplug/offline). If those were distinguishable, then we
could only regenerate on un/plug events.

Or perhaps moving to for_each_possible_cpu() is the better choice?
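
To make that option concrete, here is a sketch of the CPU loop in
crash_prepare_elf64_headers() with the iterator swapped (the loop body
is paraphrased from the existing function; only the iterator changes):

	/* Sketch: one PT_NOTE per possible CPU, so the count never changes
	 * across CPU hot un/plug or online/offline events. */
	for_each_possible_cpu(cpu) {
		phdr->p_type = PT_NOTE;
		notes_addr = per_cpu_ptr_to_phys(per_cpu_ptr(crash_notes, cpu));
		phdr->p_offset = phdr->p_paddr = notes_addr;
		phdr->p_filesz = phdr->p_memsz = sizeof(note_buf_t);
		(ehdr->e_phnum)++;
		phdr++;
	}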

eric


> 
> Thanks,
> Sourabh Jain
  
Sourabh Jain Feb. 10, 2023, 6:29 a.m. UTC | #9
On 10/02/23 01:09, Eric DeVolder wrote:
>
>
> On 2/9/23 12:43, Sourabh Jain wrote:
>> Hello Eric,
>>
>> On 09/02/23 23:01, Eric DeVolder wrote:
>>>
>>>
>>> On 2/8/23 07:44, Thomas Gleixner wrote:
>>>> Eric!
>>>>
>>>> On Tue, Feb 07 2023 at 11:23, Eric DeVolder wrote:
>>>>> On 2/1/23 05:33, Thomas Gleixner wrote:
>>>>>
>>>>> So my latest solution is introduce two new CPUHP states, 
>>>>> CPUHP_AP_ELFCOREHDR_ONLINE
>>>>> for onlining and CPUHP_BP_ELFCOREHDR_OFFLINE for offlining. I'm 
>>>>> open to better names.
>>>>>
>>>>> The CPUHP_AP_ELFCOREHDR_ONLINE needs to be placed after 
>>>>> CPUHP_BRINGUP_CPU. My
>>>>> attempts at locating this state failed when inside the STARTING 
>>>>> section, so I located
>>>>> this just inside the ONLINE sectoin. The crash hotplug handler is 
>>>>> registered on
>>>>> this state as the callback for the .startup method.
>>>>>
>>>>> The CPUHP_BP_ELFCOREHDR_OFFLINE needs to be placed before 
>>>>> CPUHP_TEARDOWN_CPU, and I
>>>>> placed it at the end of the PREPARE section. This crash hotplug 
>>>>> handler is also
>>>>> registered on this state as the callback for the .teardown method.
>>>>
>>>> TBH, that's still overengineered. Something like this:
>>>>
>>>> bool cpu_is_alive(unsigned int cpu)
>>>> {
>>>>     struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
>>>>
>>>>     return data_race(st->state) <= CPUHP_AP_IDLE_DEAD;
>>>> }
>>>>
>>>> and use this to query the actual state at crash time. That spares all
>>>> those callback heuristics.
>>>>
>>>>> I'm making my way though percpu crash_notes, elfcorehdr, vmcoreinfo,
>>>>> makedumpfile and (the consumer of it all) the userspace crash 
>>>>> utility,
>>>>> in order to understand the impact of moving from 
>>>>> for_each_present_cpu()
>>>>> to for_each_online_cpu().
>>>>
>>>> Is the packing actually worth the trouble? What's the actual win?
>>>>
>>>> Thanks,
>>>>
>>>>          tglx
>>>>
>>>>
>>>
>>> Thomas,
>>> I've investigated the passing of crash notes through the vmcore. 
>>> What I've learned is that:
>>>
>>> - linux/fs/proc/vmcore.c (which makedumpfile references to do its 
>>> job) does
>>>   not care what the contents of cpu PT_NOTES are, but it does 
>>> coalesce them together.
>>>
>>> - makedumpfile will count the number of cpu PT_NOTES in order to 
>>> determine its
>>>   nr_cpus variable, which is reported in a header, but otherwise 
>>> unused (except
>>>   for sadump method).
>>>
>>> - the crash utility, for the purposes of determining the cpus, does 
>>> not appear to
>>>   reference the elfcorehdr PT_NOTEs. Instead it locates the various
>>>   cpu_[possible|present|online]_mask and computes nr_cpus from that, 
>>> and also of
>>>   course which are online. In addition, when crash does reference 
>>> the cpu PT_NOTE,
>>>   to get its prstatus, it does so by using a percpu technique 
>>> directly in the vmcore
>>>   image memory, not via the ELF structure. Said differently, it 
>>> appears to me that
>>>   crash utility doesn't rely on the ELF PT_NOTEs for cpus; rather it 
>>> obtains them
>>>   via kernel cpumasks and the memory within the vmcore.
>>>
>>> With this understanding, I did some testing. Perhaps the most 
>>> telling test was that I
>>> changed the number of cpu PT_NOTEs emitted in the 
>>> crash_prepare_elf64_headers() to just 1,
>>> hot plugged some cpus, then also took a few offline sparsely via 
>>> chcpu, then generated a
>>> vmcore. The crash utility had no problem loading the vmcore, it 
>>> reported the proper number
>>> of cpus and the number offline (despite only one cpu PT_NOTE), and 
>>> changing to a different
>>> cpu via 'set -c 30' and the backtrace was completely valid.
>>>
>>> My take away is that crash utility does not rely upon ELF cpu 
>>> PT_NOTEs, it obtains the
>>> cpu information directly from kernel data structures. Perhaps at one 
>>> time crash relied
>>> upon the ELF information, but no more. (Perhaps there are other 
>>> crash dump analyzers
>>> that might rely on the ELF info?)
>>>
>>> So, all this to say that I see no need to change 
>>> crash_prepare_elf64_headers(). There
>>> is no compelling reason to move away from for_each_present_cpu(), or 
>>> modify the list for
>>> online/offline.
>>>
>>> Which then leaves the topic of the cpuhp state on which to register. 
>>> Perhaps reverting
>>> back to the use of CPUHP_BP_PREPARE_DYN is the right answer. There 
>>> does not appear to
>>> be a compelling need to accurately track whether the cpu went 
>>> online/offline for the
>>> purposes of creating the elfcorehdr, as ultimately the crash utility 
>>> pulls that from
>>> kernel data structures, not the elfcorehdr.
>>>
>>> I think this is what Sourabh has known and has been advocating for 
>>> an optimization
>>> path that allows not regenerating the elfcorehdr on cpu changes 
>>> (because all the percpu
>>> structs are all laid out). I do think it best to leave that as an 
>>> arch choice.
>>
>> Since things are clear on how the PT_NOTES are consumed in kdump 
>> kernel [fs/proc/vmcore.c],
>> makedumpfile, and crash tool I need your opinion on this:
>>
>> Do we really need to regenerate elfcorehdr for CPU hotplug events?
>> If yes, can you please list the elfcorehdr components that changes 
>> due to CPU hotplug.
> Due to the use of for_each_present_cpu(), it is possible for the 
> number of cpu PT_NOTEs
> to fluctuate as cpus are un/plugged. Onlining/offlining of cpus does 
> not impact the
> number of cpu PT_NOTEs (as the cpus are still present).
>
>>
>>  From what I understood, crash notes are prepared for possible CPUs 
>> as system boots and
>> could be used to create a PT_NOTE section for each possible CPU while 
>> generating the elfcorehdr
>> during the kdump kernel load.
>>
>> Now once the elfcorehdr is loaded with PT_NOTEs for every possible 
>> CPU there is no need to
>> regenerate it for CPU hotplug events. Or do we?
>
> For onlining/offlining of cpus, there is no need to regenerate the 
> elfcorehdr. However,
> for actual hot un/plug of cpus, the answer is yes due to 
> for_each_present_cpu(). The
> caveat here of course is that if crash utility is the only coredump 
> analyzer of concern,
> then it doesn't care about these cpu PT_NOTEs and there would be no 
> need to re-generate them.
>
> Also, I'm not sure if ARM cpu hotplug, which is just now coming into 
> mainstream, impacts
> any of this.
>
> Perhaps the one item that might help here is to distinguish between 
> actual hot un/plug of
> cpus, versus onlining/offlining. At the moment, I can not distinguish 
> between a hot plug
> event and an online event (and unplug/offline). If those were 
> distinguishable, then we
> could only regenerate on un/plug events.
>
> Or perhaps moving to for_each_possible_cpu() is the better choice?

Yes, because once the elfcorehdr is built with possible CPUs we don't
have to worry about the hot[un]plug case.

Here is my view on how things should be handled if a core-dump analyzer 
is dependent on
elfcorehdr PT_NOTEs to find online/offline CPUs.

A PT_NOTE in the elfcorehdr holds the address of the corresponding crash
notes (the kernel has one crash note per CPU for every possible CPU).
Though the crash notes are allocated at boot time, they are populated
when the system is on the crash path.

This is how crash notes are populated on PowerPC and I am expecting it 
would be something
similar on other architectures too.

The crashing CPU sends an IPI to every other online CPU with a callback
function that updates the crash notes of that specific CPU. Once the IPI
completes, the crashing CPU updates its own crash note and proceeds further.

The crash notes of CPUs remain uninitialized if the CPUs were offline or
hot unplugged at the time of the system crash. The core-dump analyzer
should be able to identify [un]initialized crash notes and display the
information accordingly.
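
For reference, this is roughly how a CPU's crash note gets filled on
the crash path in the common kernel code (a paraphrased sketch of
crash_save_cpu(); structure names and field layout may differ slightly
by kernel version):

void crash_save_cpu(struct pt_regs *regs, int cpu)
{
	struct elf_prstatus prstatus;
	u32 *buf;

	if ((cpu < 0) || (cpu >= nr_cpu_ids))
		return;

	/* Notes of CPUs that never reach this path stay zero-filled. */
	buf = (u32 *)per_cpu_ptr(crash_notes, cpu);
	if (!buf)
		return;

	memset(&prstatus, 0, sizeof(prstatus));
	prstatus.common.pr_pid = current->pid;
	elf_core_copy_kernel_regs(&prstatus.pr_reg, regs);
	buf = append_elf_note(buf, KEXEC_CORE_NOTE_NAME, NT_PRSTATUS,
			      &prstatus, sizeof(prstatus));
	final_note(buf);
}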

Thoughts?

- Sourabh
  
Eric DeVolder Feb. 11, 2023, 12:35 a.m. UTC | #10
On 2/10/23 00:29, Sourabh Jain wrote:
> 
> On 10/02/23 01:09, Eric DeVolder wrote:
>>
>>
>> On 2/9/23 12:43, Sourabh Jain wrote:
>>> Hello Eric,
>>>
>>> On 09/02/23 23:01, Eric DeVolder wrote:
>>>>
>>>>
>>>> On 2/8/23 07:44, Thomas Gleixner wrote:
>>>>> Eric!
>>>>>
>>>>> On Tue, Feb 07 2023 at 11:23, Eric DeVolder wrote:
>>>>>> On 2/1/23 05:33, Thomas Gleixner wrote:
>>>>>>
>>>>>> So my latest solution is introduce two new CPUHP states, CPUHP_AP_ELFCOREHDR_ONLINE
>>>>>> for onlining and CPUHP_BP_ELFCOREHDR_OFFLINE for offlining. I'm open to better names.
>>>>>>
>>>>>> The CPUHP_AP_ELFCOREHDR_ONLINE needs to be placed after CPUHP_BRINGUP_CPU. My
>>>>>> attempts at locating this state failed when inside the STARTING section, so I located
>>>>>> this just inside the ONLINE sectoin. The crash hotplug handler is registered on
>>>>>> this state as the callback for the .startup method.
>>>>>>
>>>>>> The CPUHP_BP_ELFCOREHDR_OFFLINE needs to be placed before CPUHP_TEARDOWN_CPU, and I
>>>>>> placed it at the end of the PREPARE section. This crash hotplug handler is also
>>>>>> registered on this state as the callback for the .teardown method.
>>>>>
>>>>> TBH, that's still overengineered. Something like this:
>>>>>
>>>>> bool cpu_is_alive(unsigned int cpu)
>>>>> {
>>>>>     struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
>>>>>
>>>>>     return data_race(st->state) <= CPUHP_AP_IDLE_DEAD;
>>>>> }
>>>>>
>>>>> and use this to query the actual state at crash time. That spares all
>>>>> those callback heuristics.
>>>>>
>>>>>> I'm making my way though percpu crash_notes, elfcorehdr, vmcoreinfo,
>>>>>> makedumpfile and (the consumer of it all) the userspace crash utility,
>>>>>> in order to understand the impact of moving from for_each_present_cpu()
>>>>>> to for_each_online_cpu().
>>>>>
>>>>> Is the packing actually worth the trouble? What's the actual win?
>>>>>
>>>>> Thanks,
>>>>>
>>>>>          tglx
>>>>>
>>>>>
>>>>
>>>> Thomas,
>>>> I've investigated the passing of crash notes through the vmcore. What I've learned is that:
>>>>
>>>> - linux/fs/proc/vmcore.c (which makedumpfile references to do its job) does
>>>>   not care what the contents of cpu PT_NOTES are, but it does coalesce them together.
>>>>
>>>> - makedumpfile will count the number of cpu PT_NOTES in order to determine its
>>>>   nr_cpus variable, which is reported in a header, but otherwise unused (except
>>>>   for sadump method).
>>>>
>>>> - the crash utility, for the purposes of determining the cpus, does not appear to
>>>>   reference the elfcorehdr PT_NOTEs. Instead it locates the various
>>>>   cpu_[possible|present|online]_mask and computes nr_cpus from that, and also of
>>>>   course which are online. In addition, when crash does reference the cpu PT_NOTE,
>>>>   to get its prstatus, it does so by using a percpu technique directly in the vmcore
>>>>   image memory, not via the ELF structure. Said differently, it appears to me that
>>>>   crash utility doesn't rely on the ELF PT_NOTEs for cpus; rather it obtains them
>>>>   via kernel cpumasks and the memory within the vmcore.
>>>>
>>>> With this understanding, I did some testing. Perhaps the most telling test was that I
>>>> changed the number of cpu PT_NOTEs emitted in the crash_prepare_elf64_headers() to just 1,
>>>> hot plugged some cpus, then also took a few offline sparsely via chcpu, then generated a
>>>> vmcore. The crash utility had no problem loading the vmcore, it reported the proper number
>>>> of cpus and the number offline (despite only one cpu PT_NOTE), and changing to a different
>>>> cpu via 'set -c 30' and the backtrace was completely valid.
>>>>
>>>> My take away is that crash utility does not rely upon ELF cpu PT_NOTEs, it obtains the
>>>> cpu information directly from kernel data structures. Perhaps at one time crash relied
>>>> upon the ELF information, but no more. (Perhaps there are other crash dump analyzers
>>>> that might rely on the ELF info?)
>>>>
>>>> So, all this to say that I see no need to change crash_prepare_elf64_headers(). There
>>>> is no compelling reason to move away from for_each_present_cpu(), or modify the list for
>>>> online/offline.
>>>>
>>>> Which then leaves the topic of the cpuhp state on which to register. Perhaps reverting
>>>> back to the use of CPUHP_BP_PREPARE_DYN is the right answer. There does not appear to
>>>> be a compelling need to accurately track whether the cpu went online/offline for the
>>>> purposes of creating the elfcorehdr, as ultimately the crash utility pulls that from
>>>> kernel data structures, not the elfcorehdr.
>>>>
>>>> I think this is what Sourabh has known and has been advocating for an optimization
>>>> path that allows not regenerating the elfcorehdr on cpu changes (because all the percpu
>>>> structs are all laid out). I do think it best to leave that as an arch choice.
>>>
>>> Since things are clear on how the PT_NOTES are consumed in kdump kernel [fs/proc/vmcore.c],
>>> makedumpfile, and crash tool I need your opinion on this:
>>>
>>> Do we really need to regenerate elfcorehdr for CPU hotplug events?
>>> If yes, can you please list the elfcorehdr components that changes due to CPU hotplug.
>> Due to the use of for_each_present_cpu(), it is possible for the number of cpu PT_NOTEs
>> to fluctuate as cpus are un/plugged. Onlining/offlining of cpus does not impact the
>> number of cpu PT_NOTEs (as the cpus are still present).
>>
>>>
>>>  From what I understood, crash notes are prepared for possible CPUs as system boots and
>>> could be used to create a PT_NOTE section for each possible CPU while generating the elfcorehdr
>>> during the kdump kernel load.
>>>
>>> Now once the elfcorehdr is loaded with PT_NOTEs for every possible CPU there is no need to
>>> regenerate it for CPU hotplug events. Or do we?
>>
>> For onlining/offlining of cpus, there is no need to regenerate the elfcorehdr. However,
>> for actual hot un/plug of cpus, the answer is yes due to for_each_present_cpu(). The
>> caveat here of course is that if crash utility is the only coredump analyzer of concern,
>> then it doesn't care about these cpu PT_NOTEs and there would be no need to re-generate them.
>>
>> Also, I'm not sure if ARM cpu hotplug, which is just now coming into mainstream, impacts
>> any of this.
>>
>> Perhaps the one item that might help here is to distinguish between actual hot un/plug of
>> cpus, versus onlining/offlining. At the moment, I can not distinguish between a hot plug
>> event and an online event (and unplug/offline). If those were distinguishable, then we
>> could only regenerate on un/plug events.
>>
>> Or perhaps moving to for_each_possible_cpu() is the better choice?
> 
> Yes, because once elfcorehdr is built with possible CPUs we don't have to worry about
> hot[un]plug case.
> 
> Here is my view on how things should be handled if a core-dump analyzer is dependent on
> elfcorehdr PT_NOTEs to find online/offline CPUs.
> 
> A PT_NOTE in elfcorehdr holds the address of the corresponding crash notes (kernel has
> one crash note per CPU for every possible CPU). Though the crash notes are allocated
> during the boot time they are populated when the system is on the crash path.
> 
> This is how crash notes are populated on PowerPC and I am expecting it would be something
> similar on other architectures too.
> 
> The crashing CPU sends IPI to every other online CPU with a callback function that updates the
> crash notes of that specific CPU. Once the IPI completes the crashing CPU updates its own crash
> note and proceeds further.
> 
> The crash notes of CPUs remain uninitialized if the CPUs were offline or hot unplugged at the time
> system crash. The core-dump analyzer should be able to identify [un]/initialized crash notes
> and display the information accordingly.
> 
> Thoughts?
> 
> - Sourabh

In general, I agree with your points. You've presented a strong case to go with
for_each_possible_cpu() in crash_prepare_elf64_headers(); those crash notes would always be
present, and we could ignore changes to cpus wrt elfcorehdr updates.

But what do we do about the kexec_load() syscall? The way the userspace utility works is it
determines cpus by:
  nr_cpus = sysconf(_SC_NPROCESSORS_CONF);
which is not the equivalent of possible_cpus. So the complete list of cpu PT_NOTEs is not
generated up front. We would need a solution for that.

Thanks,
eric

PS. I'll be on vacation all of next week, returning 20feb.
  
Sourabh Jain Feb. 13, 2023, 4:40 a.m. UTC | #11
On 11/02/23 06:05, Eric DeVolder wrote:
>
>
> On 2/10/23 00:29, Sourabh Jain wrote:
>>
>> On 10/02/23 01:09, Eric DeVolder wrote:
>>>
>>>
>>> On 2/9/23 12:43, Sourabh Jain wrote:
>>>> Hello Eric,
>>>>
>>>> On 09/02/23 23:01, Eric DeVolder wrote:
>>>>>
>>>>>
>>>>> On 2/8/23 07:44, Thomas Gleixner wrote:
>>>>>> Eric!
>>>>>>
>>>>>> On Tue, Feb 07 2023 at 11:23, Eric DeVolder wrote:
>>>>>>> On 2/1/23 05:33, Thomas Gleixner wrote:
>>>>>>>
>>>>>>> So my latest solution is introduce two new CPUHP states, 
>>>>>>> CPUHP_AP_ELFCOREHDR_ONLINE
>>>>>>> for onlining and CPUHP_BP_ELFCOREHDR_OFFLINE for offlining. I'm 
>>>>>>> open to better names.
>>>>>>>
>>>>>>> The CPUHP_AP_ELFCOREHDR_ONLINE needs to be placed after 
>>>>>>> CPUHP_BRINGUP_CPU. My
>>>>>>> attempts at locating this state failed when inside the STARTING 
>>>>>>> section, so I located
>>>>>>> this just inside the ONLINE sectoin. The crash hotplug handler 
>>>>>>> is registered on
>>>>>>> this state as the callback for the .startup method.
>>>>>>>
>>>>>>> The CPUHP_BP_ELFCOREHDR_OFFLINE needs to be placed before 
>>>>>>> CPUHP_TEARDOWN_CPU, and I
>>>>>>> placed it at the end of the PREPARE section. This crash hotplug 
>>>>>>> handler is also
>>>>>>> registered on this state as the callback for the .teardown method.
>>>>>>
>>>>>> TBH, that's still overengineered. Something like this:
>>>>>>
>>>>>> bool cpu_is_alive(unsigned int cpu)
>>>>>> {
>>>>>>     struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
>>>>>>
>>>>>>     return data_race(st->state) <= CPUHP_AP_IDLE_DEAD;
>>>>>> }
>>>>>>
>>>>>> and use this to query the actual state at crash time. That spares 
>>>>>> all
>>>>>> those callback heuristics.
>>>>>>
>>>>>>> I'm making my way though percpu crash_notes, elfcorehdr, 
>>>>>>> vmcoreinfo,
>>>>>>> makedumpfile and (the consumer of it all) the userspace crash 
>>>>>>> utility,
>>>>>>> in order to understand the impact of moving from 
>>>>>>> for_each_present_cpu()
>>>>>>> to for_each_online_cpu().
>>>>>>
>>>>>> Is the packing actually worth the trouble? What's the actual win?
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>>          tglx
>>>>>>
>>>>>>
>>>>>
>>>>> Thomas,
>>>>> I've investigated the passing of crash notes through the vmcore. 
>>>>> What I've learned is that:
>>>>>
>>>>> - linux/fs/proc/vmcore.c (which makedumpfile references to do its 
>>>>> job) does
>>>>>   not care what the contents of cpu PT_NOTES are, but it does 
>>>>> coalesce them together.
>>>>>
>>>>> - makedumpfile will count the number of cpu PT_NOTES in order to 
>>>>> determine its
>>>>>   nr_cpus variable, which is reported in a header, but otherwise 
>>>>> unused (except
>>>>>   for sadump method).
>>>>>
>>>>> - the crash utility, for the purposes of determining the cpus, 
>>>>> does not appear to
>>>>>   reference the elfcorehdr PT_NOTEs. Instead it locates the various
>>>>>   cpu_[possible|present|online]_mask and computes nr_cpus from 
>>>>> that, and also of
>>>>>   course which are online. In addition, when crash does reference 
>>>>> the cpu PT_NOTE,
>>>>>   to get its prstatus, it does so by using a percpu technique 
>>>>> directly in the vmcore
>>>>>   image memory, not via the ELF structure. Said differently, it 
>>>>> appears to me that
>>>>>   crash utility doesn't rely on the ELF PT_NOTEs for cpus; rather 
>>>>> it obtains them
>>>>>   via kernel cpumasks and the memory within the vmcore.
>>>>>
>>>>> With this understanding, I did some testing. Perhaps the most 
>>>>> telling test was that I
>>>>> changed the number of cpu PT_NOTEs emitted in the 
>>>>> crash_prepare_elf64_headers() to just 1,
>>>>> hot plugged some cpus, then also took a few offline sparsely via 
>>>>> chcpu, then generated a
>>>>> vmcore. The crash utility had no problem loading the vmcore, it 
>>>>> reported the proper number
>>>>> of cpus and the number offline (despite only one cpu PT_NOTE), and 
>>>>> changing to a different
>>>>> cpu via 'set -c 30' and the backtrace was completely valid.
>>>>>
>>>>> My take away is that crash utility does not rely upon ELF cpu 
>>>>> PT_NOTEs, it obtains the
>>>>> cpu information directly from kernel data structures. Perhaps at 
>>>>> one time crash relied
>>>>> upon the ELF information, but no more. (Perhaps there are other 
>>>>> crash dump analyzers
>>>>> that might rely on the ELF info?)
>>>>>
>>>>> So, all this to say that I see no need to change 
>>>>> crash_prepare_elf64_headers(). There
>>>>> is no compelling reason to move away from for_each_present_cpu(), 
>>>>> or modify the list for
>>>>> online/offline.
>>>>>
>>>>> Which then leaves the topic of the cpuhp state on which to 
>>>>> register. Perhaps reverting
>>>>> back to the use of CPUHP_BP_PREPARE_DYN is the right answer. There 
>>>>> does not appear to
>>>>> be a compelling need to accurately track whether the cpu went 
>>>>> online/offline for the
>>>>> purposes of creating the elfcorehdr, as ultimately the crash 
>>>>> utility pulls that from
>>>>> kernel data structures, not the elfcorehdr.
>>>>>
>>>>> I think this is what Sourabh has known and has been advocating for 
>>>>> an optimization
>>>>> path that allows not regenerating the elfcorehdr on cpu changes 
>>>>> (because all the percpu
>>>>> structs are all laid out). I do think it best to leave that as an 
>>>>> arch choice.
>>>>
>>>> Since things are clear on how the PT_NOTES are consumed in kdump 
>>>> kernel [fs/proc/vmcore.c],
>>>> makedumpfile, and crash tool I need your opinion on this:
>>>>
>>>> Do we really need to regenerate elfcorehdr for CPU hotplug events?
>>>> If yes, can you please list the elfcorehdr components that changes 
>>>> due to CPU hotplug.
>>> Due to the use of for_each_present_cpu(), it is possible for the 
>>> number of cpu PT_NOTEs
>>> to fluctuate as cpus are un/plugged. Onlining/offlining of cpus does 
>>> not impact the
>>> number of cpu PT_NOTEs (as the cpus are still present).
>>>
>>>>
>>>>  From what I understood, crash notes are prepared for possible CPUs 
>>>> as system boots and
>>>> could be used to create a PT_NOTE section for each possible CPU 
>>>> while generating the elfcorehdr
>>>> during the kdump kernel load.
>>>>
>>>> Now once the elfcorehdr is loaded with PT_NOTEs for every possible 
>>>> CPU there is no need to
>>>> regenerate it for CPU hotplug events. Or do we?
>>>
>>> For onlining/offlining of cpus, there is no need to regenerate the 
>>> elfcorehdr. However,
>>> for actual hot un/plug of cpus, the answer is yes due to 
>>> for_each_present_cpu(). The
>>> caveat here of course is that if crash utility is the only coredump 
>>> analyzer of concern,
>>> then it doesn't care about these cpu PT_NOTEs and there would be no 
>>> need to re-generate them.
>>>
>>> Also, I'm not sure if ARM cpu hotplug, which is just now coming into 
>>> mainstream, impacts
>>> any of this.
>>>
>>> Perhaps the one item that might help here is to distinguish between 
>>> actual hot un/plug of
>>> cpus, versus onlining/offlining. At the moment, I can not 
>>> distinguish between a hot plug
>>> event and an online event (and unplug/offline). If those were 
>>> distinguishable, then we
>>> could only regenerate on un/plug events.
>>>
>>> Or perhaps moving to for_each_possible_cpu() is the better choice?
>>
>> Yes, because once elfcorehdr is built with possible CPUs we don't 
>> have to worry about
>> hot[un]plug case.
>>
>> Here is my view on how things should be handled if a core-dump 
>> analyzer is dependent on
>> elfcorehdr PT_NOTEs to find online/offline CPUs.
>>
>> A PT_NOTE in elfcorehdr holds the address of the corresponding crash 
>> notes (kernel has
>> one crash note per CPU for every possible CPU). Though the crash 
>> notes are allocated
>> during the boot time they are populated when the system is on the 
>> crash path.
>>
>> This is how crash notes are populated on PowerPC and I am expecting 
>> it would be something
>> similar on other architectures too.
>>
>> The crashing CPU sends IPI to every other online CPU with a callback 
>> function that updates the
>> crash notes of that specific CPU. Once the IPI completes the crashing 
>> CPU updates its own crash
>> note and proceeds further.
>>
>> The crash notes of CPUs remain uninitialized if the CPUs were offline 
>> or hot unplugged at the time
>> system crash. The core-dump analyzer should be able to identify 
>> [un]/initialized crash notes
>> and display the information accordingly.
>>
>> Thoughts?
>>
>> - Sourabh
>
> In general, I agree with your points. You've presented a strong case 
> to go with for_each_possible_cpu() in crash_prepare_elf64_headers() 
> and those crash notes would always be present, and we can ignore 
> changes to cpus wrt/ elfcorehdr updates.
>
> But what do we do about kexec_load() syscall? The way the userspace 
> utility works is it determines cpus by:
>  nr_cpus = sysconf(_SC_NPROCESSORS_CONF);
> which is not the equivalent of possible_cpus. So the complete list of 
> cpu PT_NOTEs is not generated up front. We would need a solution for 
> that?
Hello Eric,

The sysconf documentation says _SC_NPROCESSORS_CONF is the number of
processors configured; isn't that equivalent to possible CPUs?

What exactly does sysconf(_SC_NPROCESSORS_CONF) return on x86? IIUC, on
PowerPC it is possible CPUs.

In case sysconf(_SC_NPROCESSORS_CONF) is not consistent then we can go
with /sys/devices/system/cpu/possible for the kexec_load case.
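
A userspace sketch of that approach for the kexec_load path
(hypothetical helper, not from kexec-tools; it assumes the mask is a
single "first-last" range, which is the common layout of that file):

#include <stdio.h>

/* Hypothetical helper: derive the CPU count from the kernel's possible
 * mask instead of sysconf(_SC_NPROCESSORS_CONF). */
static int possible_cpu_count(void)
{
	char buf[64];
	int first = 0, last = 0;
	FILE *fp = fopen("/sys/devices/system/cpu/possible", "r");

	if (!fp)
		return -1;
	if (!fgets(buf, sizeof(buf), fp)) {
		fclose(fp);
		return -1;
	}
	fclose(fp);

	/* Typically a single range such as "0-511", or just "0". */
	if (sscanf(buf, "%d-%d", &first, &last) == 2)
		return last + 1;
	if (sscanf(buf, "%d", &first) == 1)
		return first + 1;
	return -1;
}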

Thoughts?

- Sourabh Jain
  
Thomas Gleixner Feb. 13, 2023, 12:52 p.m. UTC | #12
On Mon, Feb 13 2023 at 10:10, Sourabh Jain wrote:
> The sysconf document says _SC_NPROCESSORS_CONF is processors configured, 
> isn't that equivalent to possible CPUs?

glibc tries to evaluate that in the following order:

  1) /sys/devices/system/cpu/cpu*

     That's present CPUs not possible CPUs

  2) /proc/stat

     That's online CPUs

  3) sched_getaffinity()

     That's online CPUs at best. In the worst case it's an affinity mask
     which is set on a process group

Thanks,

        tglx
  
Sourabh Jain Feb. 15, 2023, 2:53 a.m. UTC | #13
On 13/02/23 18:22, Thomas Gleixner wrote:
> On Mon, Feb 13 2023 at 10:10, Sourabh Jain wrote:
>> The sysconf document says _SC_NPROCESSORS_CONF is processors configured,
>> isn't that equivalent to possible CPUs?
> glibc tries to evaluate that in the following order:
>
>    1) /sys/devices/system/cpu/cpu*
>
>       That's present CPUs not possible CPUs
>
>    2) /proc/stat
>
>       That's online CPUs
>
>    3) sched_getaffinity()
>
>       That's online CPUs at best. In the worst case it's an affinity mask
>       which is set on a process group

Thanks for the clarification Thomas.

- Sourabh
  
Eric DeVolder Feb. 23, 2023, 8:34 p.m. UTC | #14
On 2/10/23 00:29, Sourabh Jain wrote:
> 
> On 10/02/23 01:09, Eric DeVolder wrote:
>>
>>
>> On 2/9/23 12:43, Sourabh Jain wrote:
>>> Hello Eric,
>>>
>>> On 09/02/23 23:01, Eric DeVolder wrote:
>>>>
>>>>
>>>> On 2/8/23 07:44, Thomas Gleixner wrote:
>>>>> Eric!
>>>>>
>>>>> On Tue, Feb 07 2023 at 11:23, Eric DeVolder wrote:
>>>>>> On 2/1/23 05:33, Thomas Gleixner wrote:
>>>>>>
>>>>>> So my latest solution is introduce two new CPUHP states, CPUHP_AP_ELFCOREHDR_ONLINE
>>>>>> for onlining and CPUHP_BP_ELFCOREHDR_OFFLINE for offlining. I'm open to better names.
>>>>>>
>>>>>> The CPUHP_AP_ELFCOREHDR_ONLINE needs to be placed after CPUHP_BRINGUP_CPU. My
>>>>>> attempts at locating this state failed when inside the STARTING section, so I located
>>>>>> this just inside the ONLINE sectoin. The crash hotplug handler is registered on
>>>>>> this state as the callback for the .startup method.
>>>>>>
>>>>>> The CPUHP_BP_ELFCOREHDR_OFFLINE needs to be placed before CPUHP_TEARDOWN_CPU, and I
>>>>>> placed it at the end of the PREPARE section. This crash hotplug handler is also
>>>>>> registered on this state as the callback for the .teardown method.
>>>>>
>>>>> TBH, that's still overengineered. Something like this:
>>>>>
>>>>> bool cpu_is_alive(unsigned int cpu)
>>>>> {
>>>>>     struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
>>>>>
>>>>>     return data_race(st->state) <= CPUHP_AP_IDLE_DEAD;
>>>>> }
>>>>>
>>>>> and use this to query the actual state at crash time. That spares all
>>>>> those callback heuristics.
>>>>>
>>>>>> I'm making my way though percpu crash_notes, elfcorehdr, vmcoreinfo,
>>>>>> makedumpfile and (the consumer of it all) the userspace crash utility,
>>>>>> in order to understand the impact of moving from for_each_present_cpu()
>>>>>> to for_each_online_cpu().
>>>>>
>>>>> Is the packing actually worth the trouble? What's the actual win?
>>>>>
>>>>> Thanks,
>>>>>
>>>>>          tglx
>>>>>
>>>>>
>>>>
>>>> Thomas,
>>>> I've investigated the passing of crash notes through the vmcore. What I've learned is that:
>>>>
>>>> - linux/fs/proc/vmcore.c (which makedumpfile references to do its job) does
>>>>   not care what the contents of cpu PT_NOTES are, but it does coalesce them together.
>>>>
>>>> - makedumpfile will count the number of cpu PT_NOTES in order to determine its
>>>>   nr_cpus variable, which is reported in a header, but otherwise unused (except
>>>>   for sadump method).
>>>>
>>>> - the crash utility, for the purposes of determining the cpus, does not appear to
>>>>   reference the elfcorehdr PT_NOTEs. Instead it locates the various
>>>>   cpu_[possible|present|online]_mask and computes nr_cpus from that, and also of
>>>>   course which are online. In addition, when crash does reference the cpu PT_NOTE,
>>>>   to get its prstatus, it does so by using a percpu technique directly in the vmcore
>>>>   image memory, not via the ELF structure. Said differently, it appears to me that
>>>>   crash utility doesn't rely on the ELF PT_NOTEs for cpus; rather it obtains them
>>>>   via kernel cpumasks and the memory within the vmcore.
>>>>
>>>> With this understanding, I did some testing. Perhaps the most telling test was that I
>>>> changed the number of cpu PT_NOTEs emitted in the crash_prepare_elf64_headers() to just 1,
>>>> hot plugged some cpus, then also took a few offline sparsely via chcpu, then generated a
>>>> vmcore. The crash utility had no problem loading the vmcore, it reported the proper number
>>>> of cpus and the number offline (despite only one cpu PT_NOTE), and changing to a different
>>>> cpu via 'set -c 30' and the backtrace was completely valid.
>>>>
>>>> My take away is that crash utility does not rely upon ELF cpu PT_NOTEs, it obtains the
>>>> cpu information directly from kernel data structures. Perhaps at one time crash relied
>>>> upon the ELF information, but no more. (Perhaps there are other crash dump analyzers
>>>> that might rely on the ELF info?)
>>>>
>>>> So, all this to say that I see no need to change crash_prepare_elf64_headers(). There
>>>> is no compelling reason to move away from for_each_present_cpu(), or modify the list for
>>>> online/offline.
>>>>
>>>> Which then leaves the topic of the cpuhp state on which to register. Perhaps reverting
>>>> back to the use of CPUHP_BP_PREPARE_DYN is the right answer. There does not appear to
>>>> be a compelling need to accurately track whether the cpu went online/offline for the
>>>> purposes of creating the elfcorehdr, as ultimately the crash utility pulls that from
>>>> kernel data structures, not the elfcorehdr.
>>>>
>>>> I think this is what Sourabh has known and has been advocating for an optimization
>>>> path that allows not regenerating the elfcorehdr on cpu changes (because all the percpu
>>>> structs are all laid out). I do think it best to leave that as an arch choice.
>>>
>>> Since things are clear on how the PT_NOTES are consumed in kdump kernel [fs/proc/vmcore.c],
>>> makedumpfile, and crash tool I need your opinion on this:
>>>
>>> Do we really need to regenerate elfcorehdr for CPU hotplug events?
>>> If yes, can you please list the elfcorehdr components that changes due to CPU hotplug.
>> Due to the use of for_each_present_cpu(), it is possible for the number of cpu PT_NOTEs
>> to fluctuate as cpus are un/plugged. Onlining/offlining of cpus does not impact the
>> number of cpu PT_NOTEs (as the cpus are still present).
>>
>>>
>>>  From what I understood, crash notes are prepared for possible CPUs as system boots and
>>> could be used to create a PT_NOTE section for each possible CPU while generating the elfcorehdr
>>> during the kdump kernel load.
>>>
>>> Now once the elfcorehdr is loaded with PT_NOTEs for every possible CPU there is no need to
>>> regenerate it for CPU hotplug events. Or do we?
>>
>> For onlining/offlining of cpus, there is no need to regenerate the elfcorehdr. However,
>> for actual hot un/plug of cpus, the answer is yes due to for_each_present_cpu(). The
>> caveat here of course is that if crash utility is the only coredump analyzer of concern,
>> then it doesn't care about these cpu PT_NOTEs and there would be no need to re-generate them.
>>
>> Also, I'm not sure if ARM cpu hotplug, which is just now coming into mainstream, impacts
>> any of this.
>>
>> Perhaps the one item that might help here is to distinguish between actual hot un/plug of
>> cpus, versus onlining/offlining. At the moment, I can not distinguish between a hot plug
>> event and an online event (and unplug/offline). If those were distinguishable, then we
>> could only regenerate on un/plug events.
>>
>> Or perhaps moving to for_each_possible_cpu() is the better choice?
> 
> Yes, because once elfcorehdr is built with possible CPUs we don't have to worry about
> hot[un]plug case.
> 
> Here is my view on how things should be handled if a core-dump analyzer is dependent on
> elfcorehdr PT_NOTEs to find online/offline CPUs.
> 
> A PT_NOTE in elfcorehdr holds the address of the corresponding crash notes (kernel has
> one crash note per CPU for every possible CPU). Though the crash notes are allocated
> during the boot time they are populated when the system is on the crash path.
> 
> This is how crash notes are populated on PowerPC and I am expecting it would be something
> similar on other architectures too.
> 
> The crashing CPU sends IPI to every other online CPU with a callback function that updates the
> crash notes of that specific CPU. Once the IPI completes the crashing CPU updates its own crash
> note and proceeds further.
> 
> The crash notes of CPUs remain uninitialized if the CPUs were offline or hot unplugged at the time
> system crash. The core-dump analyzer should be able to identify [un]/initialized crash notes
> and display the information accordingly.
> 
> Thoughts?
> 
> - Sourabh

I've been examining what it would mean to move to for_each_possible_cpu() in 
crash_prepare_elf64_headers(). I think it means:

- Changing for_each_present_cpu() to for_each_possible_cpu() in crash_prepare_elf64_headers().
- For kexec_load() syscall path, rewrite the incoming/supplied elfcorehdr immediately on the load 
with the elfcorehdr generated by crash_prepare_elf64_headers().
- Eliminate/remove the cpuhp machinery for handling crash hotplug events.

This would then setup PT_NOTEs for all possible cpus, which should in theory accommodate crash 
analyzers that rely on ELF PT_NOTEs for crash_notes.
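
As a rough, untested sketch of the first item above (not the actual patch), the
loop in crash_prepare_elf64_headers() would simply become:

-	/* Prepare one phdr of type PT_NOTE for each present CPU */
-	for_each_present_cpu(cpu) {
+	/* Prepare one phdr of type PT_NOTE for each possible CPU */
+	for_each_possible_cpu(cpu) {
 		phdr->p_type = PT_NOTE;
 		notes_addr = per_cpu_ptr_to_phys(per_cpu_ptr(crash_notes, cpu));
 		phdr->p_offset = phdr->p_paddr = notes_addr;
 		phdr->p_filesz = phdr->p_memsz = sizeof(note_buf_t);
 		(ehdr->e_phnum)++;
 		phdr++;
 	}

and the per-cpu exclusion logic for a CPU being hot-removed would no longer be
needed inside this loop, since the possible mask does not change at runtime.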

If staying with for_each_present_cpu() is ultimately decided, then I think the cpuhp machinery
should be left in place and each arch could decide how to handle crash cpu hotplug events. The
overhead of doing this is very minimal, and the events are likely very infrequent.

No matter which is decided, supporting crash hotplug for kexec_load still requires changes to the
userspace kexec-tools utility (to exclude the elfcorehdr from the purgatory hash and to provide
an appropriately sized elfcorehdr buffer).

I know Sourabh votes for for_each_possible_cpu(); Thomas/Boris/Baoquan/others, I'd appreciate your
opinion/insight here!

Thanks!
eric
  
Sourabh Jain Feb. 24, 2023, 8:34 a.m. UTC | #15
On 24/02/23 02:04, Eric DeVolder wrote:
>
>
> On 2/10/23 00:29, Sourabh Jain wrote:
>>
>> On 10/02/23 01:09, Eric DeVolder wrote:
>>>
>>>
>>> On 2/9/23 12:43, Sourabh Jain wrote:
>>>> Hello Eric,
>>>>
>>>> On 09/02/23 23:01, Eric DeVolder wrote:
>>>>>
>>>>>
>>>>> On 2/8/23 07:44, Thomas Gleixner wrote:
>>>>>> Eric!
>>>>>>
>>>>>> On Tue, Feb 07 2023 at 11:23, Eric DeVolder wrote:
>>>>>>> On 2/1/23 05:33, Thomas Gleixner wrote:
>>>>>>>
>>>>>>> So my latest solution is introduce two new CPUHP states, 
>>>>>>> CPUHP_AP_ELFCOREHDR_ONLINE
>>>>>>> for onlining and CPUHP_BP_ELFCOREHDR_OFFLINE for offlining. I'm 
>>>>>>> open to better names.
>>>>>>>
>>>>>>> The CPUHP_AP_ELFCOREHDR_ONLINE needs to be placed after 
>>>>>>> CPUHP_BRINGUP_CPU. My
>>>>>>> attempts at locating this state failed when inside the STARTING 
>>>>>>> section, so I located
>>>>>>> this just inside the ONLINE sectoin. The crash hotplug handler 
>>>>>>> is registered on
>>>>>>> this state as the callback for the .startup method.
>>>>>>>
>>>>>>> The CPUHP_BP_ELFCOREHDR_OFFLINE needs to be placed before 
>>>>>>> CPUHP_TEARDOWN_CPU, and I
>>>>>>> placed it at the end of the PREPARE section. This crash hotplug 
>>>>>>> handler is also
>>>>>>> registered on this state as the callback for the .teardown method.
>>>>>>
>>>>>> TBH, that's still overengineered. Something like this:
>>>>>>
>>>>>> bool cpu_is_alive(unsigned int cpu)
>>>>>> {
>>>>>>     struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
>>>>>>
>>>>>>     return data_race(st->state) <= CPUHP_AP_IDLE_DEAD;
>>>>>> }
>>>>>>
>>>>>> and use this to query the actual state at crash time. That spares 
>>>>>> all
>>>>>> those callback heuristics.
>>>>>>
>>>>>>> I'm making my way though percpu crash_notes, elfcorehdr, 
>>>>>>> vmcoreinfo,
>>>>>>> makedumpfile and (the consumer of it all) the userspace crash 
>>>>>>> utility,
>>>>>>> in order to understand the impact of moving from 
>>>>>>> for_each_present_cpu()
>>>>>>> to for_each_online_cpu().
>>>>>>
>>>>>> Is the packing actually worth the trouble? What's the actual win?
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>>          tglx
>>>>>>
>>>>>>
>>>>>
>>>>> Thomas,
>>>>> I've investigated the passing of crash notes through the vmcore. 
>>>>> What I've learned is that:
>>>>>
>>>>> - linux/fs/proc/vmcore.c (which makedumpfile references to do its 
>>>>> job) does
>>>>>   not care what the contents of cpu PT_NOTES are, but it does 
>>>>> coalesce them together.
>>>>>
>>>>> - makedumpfile will count the number of cpu PT_NOTES in order to 
>>>>> determine its
>>>>>   nr_cpus variable, which is reported in a header, but otherwise 
>>>>> unused (except
>>>>>   for sadump method).
>>>>>
>>>>> - the crash utility, for the purposes of determining the cpus, 
>>>>> does not appear to
>>>>>   reference the elfcorehdr PT_NOTEs. Instead it locates the various
>>>>>   cpu_[possible|present|online]_mask and computes nr_cpus from 
>>>>> that, and also of
>>>>>   course which are online. In addition, when crash does reference 
>>>>> the cpu PT_NOTE,
>>>>>   to get its prstatus, it does so by using a percpu technique 
>>>>> directly in the vmcore
>>>>>   image memory, not via the ELF structure. Said differently, it 
>>>>> appears to me that
>>>>>   crash utility doesn't rely on the ELF PT_NOTEs for cpus; rather 
>>>>> it obtains them
>>>>>   via kernel cpumasks and the memory within the vmcore.
>>>>>
>>>>> With this understanding, I did some testing. Perhaps the most 
>>>>> telling test was that I
>>>>> changed the number of cpu PT_NOTEs emitted in the 
>>>>> crash_prepare_elf64_headers() to just 1,
>>>>> hot plugged some cpus, then also took a few offline sparsely via 
>>>>> chcpu, then generated a
>>>>> vmcore. The crash utility had no problem loading the vmcore, it 
>>>>> reported the proper number
>>>>> of cpus and the number offline (despite only one cpu PT_NOTE), and 
>>>>> changing to a different
>>>>> cpu via 'set -c 30' and the backtrace was completely valid.
>>>>>
>>>>> My take away is that crash utility does not rely upon ELF cpu 
>>>>> PT_NOTEs, it obtains the
>>>>> cpu information directly from kernel data structures. Perhaps at 
>>>>> one time crash relied
>>>>> upon the ELF information, but no more. (Perhaps there are other 
>>>>> crash dump analyzers
>>>>> that might rely on the ELF info?)
>>>>>
>>>>> So, all this to say that I see no need to change 
>>>>> crash_prepare_elf64_headers(). There
>>>>> is no compelling reason to move away from for_each_present_cpu(), 
>>>>> or modify the list for
>>>>> online/offline.
>>>>>
>>>>> Which then leaves the topic of the cpuhp state on which to 
>>>>> register. Perhaps reverting
>>>>> back to the use of CPUHP_BP_PREPARE_DYN is the right answer. There 
>>>>> does not appear to
>>>>> be a compelling need to accurately track whether the cpu went 
>>>>> online/offline for the
>>>>> purposes of creating the elfcorehdr, as ultimately the crash 
>>>>> utility pulls that from
>>>>> kernel data structures, not the elfcorehdr.
>>>>>
>>>>> I think this is what Sourabh has known and has been advocating for 
>>>>> an optimization
>>>>> path that allows not regenerating the elfcorehdr on cpu changes 
>>>>> (because all the percpu
>>>>> structs are all laid out). I do think it best to leave that as an 
>>>>> arch choice.
>>>>
>>>> Since things are clear on how the PT_NOTES are consumed in kdump 
>>>> kernel [fs/proc/vmcore.c],
>>>> makedumpfile, and crash tool I need your opinion on this:
>>>>
>>>> Do we really need to regenerate elfcorehdr for CPU hotplug events?
>>>> If yes, can you please list the elfcorehdr components that changes 
>>>> due to CPU hotplug.
>>> Due to the use of for_each_present_cpu(), it is possible for the 
>>> number of cpu PT_NOTEs
>>> to fluctuate as cpus are un/plugged. Onlining/offlining of cpus does 
>>> not impact the
>>> number of cpu PT_NOTEs (as the cpus are still present).
>>>
>>>>
>>>>  From what I understood, crash notes are prepared for possible CPUs 
>>>> as system boots and
>>>> could be used to create a PT_NOTE section for each possible CPU 
>>>> while generating the elfcorehdr
>>>> during the kdump kernel load.
>>>>
>>>> Now once the elfcorehdr is loaded with PT_NOTEs for every possible 
>>>> CPU there is no need to
>>>> regenerate it for CPU hotplug events. Or do we?
>>>
>>> For onlining/offlining of cpus, there is no need to regenerate the 
>>> elfcorehdr. However,
>>> for actual hot un/plug of cpus, the answer is yes due to 
>>> for_each_present_cpu(). The
>>> caveat here of course is that if crash utility is the only coredump 
>>> analyzer of concern,
>>> then it doesn't care about these cpu PT_NOTEs and there would be no 
>>> need to re-generate them.
>>>
>>> Also, I'm not sure if ARM cpu hotplug, which is just now coming into 
>>> mainstream, impacts
>>> any of this.
>>>
>>> Perhaps the one item that might help here is to distinguish between 
>>> actual hot un/plug of
>>> cpus, versus onlining/offlining. At the moment, I can not 
>>> distinguish between a hot plug
>>> event and an online event (and unplug/offline). If those were 
>>> distinguishable, then we
>>> could only regenerate on un/plug events.
>>>
>>> Or perhaps moving to for_each_possible_cpu() is the better choice?
>>
>> Yes, because once elfcorehdr is built with possible CPUs we don't 
>> have to worry about
>> hot[un]plug case.
>>
>> Here is my view on how things should be handled if a core-dump 
>> analyzer is dependent on
>> elfcorehdr PT_NOTEs to find online/offline CPUs.
>>
>> A PT_NOTE in elfcorehdr holds the address of the corresponding crash 
>> notes (kernel has
>> one crash note per CPU for every possible CPU). Though the crash 
>> notes are allocated
>> during the boot time they are populated when the system is on the 
>> crash path.
>>
>> This is how crash notes are populated on PowerPC and I am expecting 
>> it would be something
>> similar on other architectures too.
>>
>> The crashing CPU sends IPI to every other online CPU with a callback 
>> function that updates the
>> crash notes of that specific CPU. Once the IPI completes the crashing 
>> CPU updates its own crash
>> note and proceeds further.
>>
>> The crash notes of CPUs remain uninitialized if the CPUs were offline 
>> or hot unplugged at the time
>> system crash. The core-dump analyzer should be able to identify 
>> [un]/initialized crash notes
>> and display the information accordingly.
>>
>> Thoughts?
>>
>> - Sourabh
>
> I've been examining what it would mean to move to 
> for_each_possible_cpu() in crash_prepare_elf64_headers(). I think it 
> means:
>
> - Changing for_each_present_cpu() to for_each_possible_cpu() in 
> crash_prepare_elf64_headers().
> - For kexec_load() syscall path, rewrite the incoming/supplied 
> elfcorehdr immediately on the load with the elfcorehdr generated by 
> crash_prepare_elf64_headers().
> - Eliminate/remove the cpuhp machinery for handling crash hotplug events.

If for_each_present_cpu() is replaced with for_each_possible_cpu() I still
need the cpuhp machinery
to update the FDT kexec segment for the CPU hot add case.


>
> This would then setup PT_NOTEs for all possible cpus, which should in 
> theory accommodate crash analyzers that rely on ELF PT_NOTEs for 
> crash_notes.
>
> If staying with for_each_present_cpu() is ultimately decided, then I 
> think leaving the cpuhp machinery in place and each arch could decide 
> how to handle crash cpu hotplug events. The overhead for doing this is 
> very minimal, and the events are likely very infrequent.

I agree. Some architectures may need the cpuhp machinery to update kexec
segment[s] other than the elfcorehdr, for example the FDT on PowerPC.
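
For context, the generic hook that keeps such per-arch updates possible is just
a pair of cpuhp callbacks; a minimal sketch (the glue below is illustrative,
only the KEXEC_CRASH_HP_* action names and crash_handle_hotplug_event() come
from the series) might look like:

static int crash_cpuhp_online(unsigned int cpu)
{
	crash_handle_hotplug_event(KEXEC_CRASH_HP_ADD_CPU, cpu);
	return 0;
}

static int crash_cpuhp_offline(unsigned int cpu)
{
	crash_handle_hotplug_event(KEXEC_CRASH_HP_REMOVE_CPU, cpu);
	return 0;
}

static int __init crash_hotplug_init(void)
{
	int ret;

	/* The arch handler decides whether the elfcorehdr, an FDT
	 * segment, or nothing at all needs to be updated. */
	ret = cpuhp_setup_state_nocalls(CPUHP_BP_PREPARE_DYN, "crash/cpuhp",
					crash_cpuhp_online, crash_cpuhp_offline);
	return ret < 0 ? ret : 0;
}
subsys_initcall(crash_hotplug_init);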

- Sourabh Jain
  
Eric DeVolder Feb. 24, 2023, 8:16 p.m. UTC | #16
On 2/24/23 02:34, Sourabh Jain wrote:
> 
> On 24/02/23 02:04, Eric DeVolder wrote:
>>
>>
>> On 2/10/23 00:29, Sourabh Jain wrote:
>>>
>>> On 10/02/23 01:09, Eric DeVolder wrote:
>>>>
>>>>
>>>> On 2/9/23 12:43, Sourabh Jain wrote:
>>>>> Hello Eric,
>>>>>
>>>>> On 09/02/23 23:01, Eric DeVolder wrote:
>>>>>>
>>>>>>
>>>>>> On 2/8/23 07:44, Thomas Gleixner wrote:
>>>>>>> Eric!
>>>>>>>
>>>>>>> On Tue, Feb 07 2023 at 11:23, Eric DeVolder wrote:
>>>>>>>> On 2/1/23 05:33, Thomas Gleixner wrote:
>>>>>>>>
>>>>>>>> So my latest solution is introduce two new CPUHP states, CPUHP_AP_ELFCOREHDR_ONLINE
>>>>>>>> for onlining and CPUHP_BP_ELFCOREHDR_OFFLINE for offlining. I'm open to better names.
>>>>>>>>
>>>>>>>> The CPUHP_AP_ELFCOREHDR_ONLINE needs to be placed after CPUHP_BRINGUP_CPU. My
>>>>>>>> attempts at locating this state failed when inside the STARTING section, so I located
>>>>>>>> this just inside the ONLINE sectoin. The crash hotplug handler is registered on
>>>>>>>> this state as the callback for the .startup method.
>>>>>>>>
>>>>>>>> The CPUHP_BP_ELFCOREHDR_OFFLINE needs to be placed before CPUHP_TEARDOWN_CPU, and I
>>>>>>>> placed it at the end of the PREPARE section. This crash hotplug handler is also
>>>>>>>> registered on this state as the callback for the .teardown method.
>>>>>>>
>>>>>>> TBH, that's still overengineered. Something like this:
>>>>>>>
>>>>>>> bool cpu_is_alive(unsigned int cpu)
>>>>>>> {
>>>>>>>     struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
>>>>>>>
>>>>>>>     return data_race(st->state) <= CPUHP_AP_IDLE_DEAD;
>>>>>>> }
>>>>>>>
>>>>>>> and use this to query the actual state at crash time. That spares all
>>>>>>> those callback heuristics.
>>>>>>>
>>>>>>>> I'm making my way though percpu crash_notes, elfcorehdr, vmcoreinfo,
>>>>>>>> makedumpfile and (the consumer of it all) the userspace crash utility,
>>>>>>>> in order to understand the impact of moving from for_each_present_cpu()
>>>>>>>> to for_each_online_cpu().
>>>>>>>
>>>>>>> Is the packing actually worth the trouble? What's the actual win?
>>>>>>>
>>>>>>> Thanks,
>>>>>>>
>>>>>>>          tglx
>>>>>>>
>>>>>>>
>>>>>>
>>>>>> Thomas,
>>>>>> I've investigated the passing of crash notes through the vmcore. What I've learned is that:
>>>>>>
>>>>>> - linux/fs/proc/vmcore.c (which makedumpfile references to do its job) does
>>>>>>   not care what the contents of cpu PT_NOTES are, but it does coalesce them together.
>>>>>>
>>>>>> - makedumpfile will count the number of cpu PT_NOTES in order to determine its
>>>>>>   nr_cpus variable, which is reported in a header, but otherwise unused (except
>>>>>>   for sadump method).
>>>>>>
>>>>>> - the crash utility, for the purposes of determining the cpus, does not appear to
>>>>>>   reference the elfcorehdr PT_NOTEs. Instead it locates the various
>>>>>>   cpu_[possible|present|online]_mask and computes nr_cpus from that, and also of
>>>>>>   course which are online. In addition, when crash does reference the cpu PT_NOTE,
>>>>>>   to get its prstatus, it does so by using a percpu technique directly in the vmcore
>>>>>>   image memory, not via the ELF structure. Said differently, it appears to me that
>>>>>>   crash utility doesn't rely on the ELF PT_NOTEs for cpus; rather it obtains them
>>>>>>   via kernel cpumasks and the memory within the vmcore.
>>>>>>
>>>>>> With this understanding, I did some testing. Perhaps the most telling test was that I
>>>>>> changed the number of cpu PT_NOTEs emitted in the crash_prepare_elf64_headers() to just 1,
>>>>>> hot plugged some cpus, then also took a few offline sparsely via chcpu, then generated a
>>>>>> vmcore. The crash utility had no problem loading the vmcore, it reported the proper number
>>>>>> of cpus and the number offline (despite only one cpu PT_NOTE), and changing to a different
>>>>>> cpu via 'set -c 30' and the backtrace was completely valid.
>>>>>>
>>>>>> My take away is that crash utility does not rely upon ELF cpu PT_NOTEs, it obtains the
>>>>>> cpu information directly from kernel data structures. Perhaps at one time crash relied
>>>>>> upon the ELF information, but no more. (Perhaps there are other crash dump analyzers
>>>>>> that might rely on the ELF info?)
>>>>>>
>>>>>> So, all this to say that I see no need to change crash_prepare_elf64_headers(). There
>>>>>> is no compelling reason to move away from for_each_present_cpu(), or modify the list for
>>>>>> online/offline.
>>>>>>
>>>>>> Which then leaves the topic of the cpuhp state on which to register. Perhaps reverting
>>>>>> back to the use of CPUHP_BP_PREPARE_DYN is the right answer. There does not appear to
>>>>>> be a compelling need to accurately track whether the cpu went online/offline for the
>>>>>> purposes of creating the elfcorehdr, as ultimately the crash utility pulls that from
>>>>>> kernel data structures, not the elfcorehdr.
>>>>>>
>>>>>> I think this is what Sourabh has known and has been advocating for an optimization
>>>>>> path that allows not regenerating the elfcorehdr on cpu changes (because all the percpu
>>>>>> structs are all laid out). I do think it best to leave that as an arch choice.
>>>>>
>>>>> Since things are clear on how the PT_NOTES are consumed in kdump kernel [fs/proc/vmcore.c],
>>>>> makedumpfile, and crash tool I need your opinion on this:
>>>>>
>>>>> Do we really need to regenerate elfcorehdr for CPU hotplug events?
>>>>> If yes, can you please list the elfcorehdr components that changes due to CPU hotplug.
>>>> Due to the use of for_each_present_cpu(), it is possible for the number of cpu PT_NOTEs
>>>> to fluctuate as cpus are un/plugged. Onlining/offlining of cpus does not impact the
>>>> number of cpu PT_NOTEs (as the cpus are still present).
>>>>
>>>>>
>>>>>  From what I understood, crash notes are prepared for possible CPUs as system boots and
>>>>> could be used to create a PT_NOTE section for each possible CPU while generating the elfcorehdr
>>>>> during the kdump kernel load.
>>>>>
>>>>> Now once the elfcorehdr is loaded with PT_NOTEs for every possible CPU there is no need to
>>>>> regenerate it for CPU hotplug events. Or do we?
>>>>
>>>> For onlining/offlining of cpus, there is no need to regenerate the elfcorehdr. However,
>>>> for actual hot un/plug of cpus, the answer is yes due to for_each_present_cpu(). The
>>>> caveat here of course is that if crash utility is the only coredump analyzer of concern,
>>>> then it doesn't care about these cpu PT_NOTEs and there would be no need to re-generate them.
>>>>
>>>> Also, I'm not sure if ARM cpu hotplug, which is just now coming into mainstream, impacts
>>>> any of this.
>>>>
>>>> Perhaps the one item that might help here is to distinguish between actual hot un/plug of
>>>> cpus, versus onlining/offlining. At the moment, I can not distinguish between a hot plug
>>>> event and an online event (and unplug/offline). If those were distinguishable, then we
>>>> could only regenerate on un/plug events.
>>>>
>>>> Or perhaps moving to for_each_possible_cpu() is the better choice?
>>>
>>> Yes, because once elfcorehdr is built with possible CPUs we don't have to worry about
>>> hot[un]plug case.
>>>
>>> Here is my view on how things should be handled if a core-dump analyzer is dependent on
>>> elfcorehdr PT_NOTEs to find online/offline CPUs.
>>>
>>> A PT_NOTE in elfcorehdr holds the address of the corresponding crash notes (kernel has
>>> one crash note per CPU for every possible CPU). Though the crash notes are allocated
>>> during the boot time they are populated when the system is on the crash path.
>>>
>>> This is how crash notes are populated on PowerPC and I am expecting it would be something
>>> similar on other architectures too.
>>>
>>> The crashing CPU sends IPI to every other online CPU with a callback function that updates the
>>> crash notes of that specific CPU. Once the IPI completes the crashing CPU updates its own crash
>>> note and proceeds further.
>>>
>>> The crash notes of CPUs remain uninitialized if the CPUs were offline or hot unplugged at the time
>>> system crash. The core-dump analyzer should be able to identify [un]/initialized crash notes
>>> and display the information accordingly.
>>>
>>> Thoughts?
>>>
>>> - Sourabh
>>
>> I've been examining what it would mean to move to for_each_possible_cpu() in 
>> crash_prepare_elf64_headers(). I think it means:
>>
>> - Changing for_each_present_cpu() to for_each_possible_cpu() in crash_prepare_elf64_headers().
>> - For kexec_load() syscall path, rewrite the incoming/supplied elfcorehdr immediately on the load 
>> with the elfcorehdr generated by crash_prepare_elf64_headers().
>> - Eliminate/remove the cpuhp machinery for handling crash hotplug events.
> 
> If for_each_present_cpu is replaced with for_each_possible_cpu I still need cpuhp machinery
> to update FDT kexec segment for CPU hot add case.

Ah, ok, that's important! So the cpuhp callbacks are still needed.
> 
> 
>>
>> This would then setup PT_NOTEs for all possible cpus, which should in theory accommodate crash 
>> analyzers that rely on ELF PT_NOTEs for crash_notes.
>>
>> If staying with for_each_present_cpu() is ultimately decided, then I think leaving the cpuhp 
>> machinery in place and each arch could decide how to handle crash cpu hotplug events. The overhead 
>> for doing this is very minimal, and the events are likely very infrequent.
> 
> I agree. Some architectures may need cpuhp machinery to update kexec segment[s] other then 
> elfcorehdr. For example FDT on PowerPC.
> 
> - Sourabh Jain

OK, I was thinking that the desire was to eliminate the cpuhp callbacks. In reality, the desire is
to change to for_each_possible_cpu(). Given that the kernel creates crash_notes for all possible
cpus upon kernel boot, there seems to be no reason not to do this?

HOWEVER...

It's not clear to me that this particular change needs to be part of this series. Its inclusion
would facilitate PPC support, but doesn't "solve" anything in general. In fact it causes kexec_load
and kexec_file_load to deviate (kexec_load via userspace kexec does the equivalent of
for_each_present_cpu(), whereas with this change kexec_file_load would do for_each_possible_cpu(),
until a hot plug event occurs, after which both would do for_each_possible_cpu()). And if this change
were to arrive as part of Sourabh's PPC support, then it does not appear to impact x86 (not sure about
other arches). And the 'crash' dump analyzer doesn't care either way.

Including this change would enable an optimization path (for x86 at least) that short-circuits cpu 
hotplug changes in the arch crash handler, for example:

diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
index aca3f1817674..0883f6b11de4 100644
--- a/arch/x86/kernel/crash.c
+++ b/arch/x86/kernel/crash.c
@@ -473,6 +473,11 @@ void arch_crash_handle_hotplug_event(struct kimage *image)
     unsigned long mem, memsz;
     unsigned long elfsz = 0;

+   if (image->file_mode && (
+       image->hp_action == KEXEC_CRASH_HP_ADD_CPU ||
+       image->hp_action == KEXEC_CRASH_HP_REMOVE_CPU))
+       return;
+
     /*
      * Create the new elfcorehdr reflecting the changes to CPU and/or
      * memory resources.

I'm not sure that is compelling given the infrequent nature of cpu hotplug events.

In my mind I still have a question about the kexec_load() path. The userspace kexec cannot do the
equivalent of for_each_possible_cpu(). It can obtain the max possible cpus from
/sys/devices/system/cpu/possible, but for those cpus not present the /sys/devices/system/cpu/cpuXX
directory is not available and so the crash_notes entries are not available. My attempts to expose
all cpuXX led to odd behavior that required changes in ACPI and arch code that looked untenable.
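
To make that limitation concrete: the per-cpu notes addresses that userspace
needs come from the cpuXX/crash_notes sysfs attribute, which only exists once a
cpu is present. A (hypothetical) reader would look roughly like:

#include <stdio.h>
#include <stdint.h>

/* Return the physical address of cpu's crash_notes buffer, or 0 if the
 * cpuNN directory (and therefore the attribute) does not exist, i.e.
 * the cpu is possible but not present. */
static uint64_t crash_notes_addr(int cpu)
{
	char path[64];
	unsigned long long addr = 0;
	FILE *fp;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/cpu%d/crash_notes", cpu);
	fp = fopen(path, "r");
	if (!fp)
		return 0;
	if (fscanf(fp, "%llx", &addr) != 1)
		addr = 0;
	fclose(fp);
	return (uint64_t)addr;
}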

There seem to be these options available for the kexec_load() path:
- immediately rewrite the elfcorehdr upon load via a call to crash_prepare_elf64_headers(). I've 
made this work with the following, as proof of concept:

diff --git a/kernel/kexec.c b/kernel/kexec.c
index cb8e6e6f983c..4eb201270f97 100644
--- a/kernel/kexec.c
+++ b/kernel/kexec.c
@@ -163,6 +163,12 @@ static int do_kexec_load(unsigned long entry, unsigned long
     kimage_free(image);
  out_unlock:
     kexec_unlock();
+   if (IS_ENABLED(CONFIG_CRASH_HOTPLUG)) {
+       if ((flags & KEXEC_ON_CRASH) && kexec_crash_image) {
+           crash_handle_hotplug_event(KEXEC_CRASH_HP_NONE, KEXEC_CRASH_HP_INVALID_CPU);
+       }
+   }
     return ret;
  }

- Another option is to spend the time to determine whether exposing all cpuXX is a viable solution;
I have no idea what the impacts to userspace would be for possible-but-not-yet-present cpuXX entries.
It might also mean requiring a 'present' entry available within the cpuXX.

- Another option is to simply let the hot plug events rewrite the elfcorehdr on demand. This is what
I originally put forth, but I'm not sure how this impacts PPC given the for_each_possible_cpu() change.

The concern is that today, both kexec_load and kexec_file_load mirror each other with respect to
for_each_present_cpu(); that is, userspace kexec is able to generate the elfcorehdr for cpus the same
as kexec_file_load would. But by changing to for_each_possible_cpu(), the two would deviate.

Thoughts?
eric
  
Sourabh Jain Feb. 27, 2023, 6:11 a.m. UTC | #17
On 25/02/23 01:46, Eric DeVolder wrote:
>
>
> On 2/24/23 02:34, Sourabh Jain wrote:
>>
>> On 24/02/23 02:04, Eric DeVolder wrote:
>>>
>>>
>>> On 2/10/23 00:29, Sourabh Jain wrote:
>>>>
>>>> On 10/02/23 01:09, Eric DeVolder wrote:
>>>>>
>>>>>
>>>>> On 2/9/23 12:43, Sourabh Jain wrote:
>>>>>> Hello Eric,
>>>>>>
>>>>>> On 09/02/23 23:01, Eric DeVolder wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 2/8/23 07:44, Thomas Gleixner wrote:
>>>>>>>> Eric!
>>>>>>>>
>>>>>>>> On Tue, Feb 07 2023 at 11:23, Eric DeVolder wrote:
>>>>>>>>> On 2/1/23 05:33, Thomas Gleixner wrote:
>>>>>>>>>
>>>>>>>>> So my latest solution is introduce two new CPUHP states, 
>>>>>>>>> CPUHP_AP_ELFCOREHDR_ONLINE
>>>>>>>>> for onlining and CPUHP_BP_ELFCOREHDR_OFFLINE for offlining. 
>>>>>>>>> I'm open to better names.
>>>>>>>>>
>>>>>>>>> The CPUHP_AP_ELFCOREHDR_ONLINE needs to be placed after 
>>>>>>>>> CPUHP_BRINGUP_CPU. My
>>>>>>>>> attempts at locating this state failed when inside the 
>>>>>>>>> STARTING section, so I located
>>>>>>>>> this just inside the ONLINE sectoin. The crash hotplug handler 
>>>>>>>>> is registered on
>>>>>>>>> this state as the callback for the .startup method.
>>>>>>>>>
>>>>>>>>> The CPUHP_BP_ELFCOREHDR_OFFLINE needs to be placed before 
>>>>>>>>> CPUHP_TEARDOWN_CPU, and I
>>>>>>>>> placed it at the end of the PREPARE section. This crash 
>>>>>>>>> hotplug handler is also
>>>>>>>>> registered on this state as the callback for the .teardown 
>>>>>>>>> method.
>>>>>>>>
>>>>>>>> TBH, that's still overengineered. Something like this:
>>>>>>>>
>>>>>>>> bool cpu_is_alive(unsigned int cpu)
>>>>>>>> {
>>>>>>>>     struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
>>>>>>>>
>>>>>>>>     return data_race(st->state) <= CPUHP_AP_IDLE_DEAD;
>>>>>>>> }
>>>>>>>>
>>>>>>>> and use this to query the actual state at crash time. That 
>>>>>>>> spares all
>>>>>>>> those callback heuristics.
>>>>>>>>
>>>>>>>>> I'm making my way though percpu crash_notes, elfcorehdr, 
>>>>>>>>> vmcoreinfo,
>>>>>>>>> makedumpfile and (the consumer of it all) the userspace crash 
>>>>>>>>> utility,
>>>>>>>>> in order to understand the impact of moving from 
>>>>>>>>> for_each_present_cpu()
>>>>>>>>> to for_each_online_cpu().
>>>>>>>>
>>>>>>>> Is the packing actually worth the trouble? What's the actual win?
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>>
>>>>>>>>          tglx
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>> Thomas,
>>>>>>> I've investigated the passing of crash notes through the vmcore. 
>>>>>>> What I've learned is that:
>>>>>>>
>>>>>>> - linux/fs/proc/vmcore.c (which makedumpfile references to do 
>>>>>>> its job) does
>>>>>>>   not care what the contents of cpu PT_NOTES are, but it does 
>>>>>>> coalesce them together.
>>>>>>>
>>>>>>> - makedumpfile will count the number of cpu PT_NOTES in order to 
>>>>>>> determine its
>>>>>>>   nr_cpus variable, which is reported in a header, but otherwise 
>>>>>>> unused (except
>>>>>>>   for sadump method).
>>>>>>>
>>>>>>> - the crash utility, for the purposes of determining the cpus, 
>>>>>>> does not appear to
>>>>>>>   reference the elfcorehdr PT_NOTEs. Instead it locates the various
>>>>>>>   cpu_[possible|present|online]_mask and computes nr_cpus from 
>>>>>>> that, and also of
>>>>>>>   course which are online. In addition, when crash does 
>>>>>>> reference the cpu PT_NOTE,
>>>>>>>   to get its prstatus, it does so by using a percpu technique 
>>>>>>> directly in the vmcore
>>>>>>>   image memory, not via the ELF structure. Said differently, it 
>>>>>>> appears to me that
>>>>>>>   crash utility doesn't rely on the ELF PT_NOTEs for cpus; 
>>>>>>> rather it obtains them
>>>>>>>   via kernel cpumasks and the memory within the vmcore.
>>>>>>>
>>>>>>> With this understanding, I did some testing. Perhaps the most 
>>>>>>> telling test was that I
>>>>>>> changed the number of cpu PT_NOTEs emitted in the 
>>>>>>> crash_prepare_elf64_headers() to just 1,
>>>>>>> hot plugged some cpus, then also took a few offline sparsely via 
>>>>>>> chcpu, then generated a
>>>>>>> vmcore. The crash utility had no problem loading the vmcore, it 
>>>>>>> reported the proper number
>>>>>>> of cpus and the number offline (despite only one cpu PT_NOTE), 
>>>>>>> and changing to a different
>>>>>>> cpu via 'set -c 30' and the backtrace was completely valid.
>>>>>>>
>>>>>>> My take away is that crash utility does not rely upon ELF cpu 
>>>>>>> PT_NOTEs, it obtains the
>>>>>>> cpu information directly from kernel data structures. Perhaps at 
>>>>>>> one time crash relied
>>>>>>> upon the ELF information, but no more. (Perhaps there are other 
>>>>>>> crash dump analyzers
>>>>>>> that might rely on the ELF info?)
>>>>>>>
>>>>>>> So, all this to say that I see no need to change 
>>>>>>> crash_prepare_elf64_headers(). There
>>>>>>> is no compelling reason to move away from 
>>>>>>> for_each_present_cpu(), or modify the list for
>>>>>>> online/offline.
>>>>>>>
>>>>>>> Which then leaves the topic of the cpuhp state on which to 
>>>>>>> register. Perhaps reverting
>>>>>>> back to the use of CPUHP_BP_PREPARE_DYN is the right answer. 
>>>>>>> There does not appear to
>>>>>>> be a compelling need to accurately track whether the cpu went 
>>>>>>> online/offline for the
>>>>>>> purposes of creating the elfcorehdr, as ultimately the crash 
>>>>>>> utility pulls that from
>>>>>>> kernel data structures, not the elfcorehdr.
>>>>>>>
>>>>>>> I think this is what Sourabh has known and has been advocating 
>>>>>>> for an optimization
>>>>>>> path that allows not regenerating the elfcorehdr on cpu changes 
>>>>>>> (because all the percpu
>>>>>>> structs are all laid out). I do think it best to leave that as 
>>>>>>> an arch choice.
>>>>>>
>>>>>> Since things are clear on how the PT_NOTES are consumed in kdump 
>>>>>> kernel [fs/proc/vmcore.c],
>>>>>> makedumpfile, and crash tool I need your opinion on this:
>>>>>>
>>>>>> Do we really need to regenerate elfcorehdr for CPU hotplug events?
>>>>>> If yes, can you please list the elfcorehdr components that 
>>>>>> changes due to CPU hotplug.
>>>>> Due to the use of for_each_present_cpu(), it is possible for the 
>>>>> number of cpu PT_NOTEs
>>>>> to fluctuate as cpus are un/plugged. Onlining/offlining of cpus 
>>>>> does not impact the
>>>>> number of cpu PT_NOTEs (as the cpus are still present).
>>>>>
>>>>>>
>>>>>>  From what I understood, crash notes are prepared for possible 
>>>>>> CPUs as system boots and
>>>>>> could be used to create a PT_NOTE section for each possible CPU 
>>>>>> while generating the elfcorehdr
>>>>>> during the kdump kernel load.
>>>>>>
>>>>>> Now once the elfcorehdr is loaded with PT_NOTEs for every 
>>>>>> possible CPU there is no need to
>>>>>> regenerate it for CPU hotplug events. Or do we?
>>>>>
>>>>> For onlining/offlining of cpus, there is no need to regenerate the 
>>>>> elfcorehdr. However,
>>>>> for actual hot un/plug of cpus, the answer is yes due to 
>>>>> for_each_present_cpu(). The
>>>>> caveat here of course is that if crash utility is the only 
>>>>> coredump analyzer of concern,
>>>>> then it doesn't care about these cpu PT_NOTEs and there would be 
>>>>> no need to re-generate them.
>>>>>
>>>>> Also, I'm not sure if ARM cpu hotplug, which is just now coming 
>>>>> into mainstream, impacts
>>>>> any of this.
>>>>>
>>>>> Perhaps the one item that might help here is to distinguish 
>>>>> between actual hot un/plug of
>>>>> cpus, versus onlining/offlining. At the moment, I can not 
>>>>> distinguish between a hot plug
>>>>> event and an online event (and unplug/offline). If those were 
>>>>> distinguishable, then we
>>>>> could only regenerate on un/plug events.
>>>>>
>>>>> Or perhaps moving to for_each_possible_cpu() is the better choice?
>>>>
>>>> Yes, because once elfcorehdr is built with possible CPUs we don't 
>>>> have to worry about
>>>> hot[un]plug case.
>>>>
>>>> Here is my view on how things should be handled if a core-dump 
>>>> analyzer is dependent on
>>>> elfcorehdr PT_NOTEs to find online/offline CPUs.
>>>>
>>>> A PT_NOTE in elfcorehdr holds the address of the corresponding 
>>>> crash notes (kernel has
>>>> one crash note per CPU for every possible CPU). Though the crash 
>>>> notes are allocated
>>>> during the boot time they are populated when the system is on the 
>>>> crash path.
>>>>
>>>> This is how crash notes are populated on PowerPC and I am expecting 
>>>> it would be something
>>>> similar on other architectures too.
>>>>
>>>> The crashing CPU sends IPI to every other online CPU with a 
>>>> callback function that updates the
>>>> crash notes of that specific CPU. Once the IPI completes the 
>>>> crashing CPU updates its own crash
>>>> note and proceeds further.
>>>>
>>>> The crash notes of CPUs remain uninitialized if the CPUs were 
>>>> offline or hot unplugged at the time
>>>> system crash. The core-dump analyzer should be able to identify 
>>>> [un]/initialized crash notes
>>>> and display the information accordingly.
>>>>
>>>> Thoughts?
>>>>
>>>> - Sourabh
>>>
>>> I've been examining what it would mean to move to 
>>> for_each_possible_cpu() in crash_prepare_elf64_headers(). I think it 
>>> means:
>>>
>>> - Changing for_each_present_cpu() to for_each_possible_cpu() in 
>>> crash_prepare_elf64_headers().
>>> - For kexec_load() syscall path, rewrite the incoming/supplied 
>>> elfcorehdr immediately on the load with the elfcorehdr generated by 
>>> crash_prepare_elf64_headers().
>>> - Eliminate/remove the cpuhp machinery for handling crash hotplug 
>>> events.
>>
>> If for_each_present_cpu is replaced with for_each_possible_cpu I 
>> still need cpuhp machinery
>> to update FDT kexec segment for CPU hot add case.
>
> Ah, ok, that's important! So the cpuhp callbacks are still needed.
>>
>>
>>>
>>> This would then setup PT_NOTEs for all possible cpus, which should 
>>> in theory accommodate crash analyzers that rely on ELF PT_NOTEs for 
>>> crash_notes.
>>>
>>> If staying with for_each_present_cpu() is ultimately decided, then I 
>>> think leaving the cpuhp machinery in place and each arch could 
>>> decide how to handle crash cpu hotplug events. The overhead for 
>>> doing this is very minimal, and the events are likely very infrequent.
>>
>> I agree. Some architectures may need cpuhp machinery to update kexec 
>> segment[s] other then elfcorehdr. For example FDT on PowerPC.
>>
>> - Sourabh Jain
>
> OK, I was thinking that the desire was to eliminate the cpuhp 
> callbacks. In reality, the desire is to change to 
> for_each_possible_cpu(). Given that the kernel creates crash_notes for 
> all possible cpus upon kernel boot, there seems to be no reason to not 
> do this?
>
> HOWEVER...
>
> It's not clear to me that this particular change needs to be part of 
> this series. It's inclusion would facilitate PPC support, but doesn't 
> "solve" anything in general. In fact it causes kexec_load and 
> kexec_file_load to deviate (kexec_load via userspace kexec does the 
> equivalent of for_each_present_cpu() where as with this change 
> kexec_file_load would do for_each_possible_cpu(); until a hot plug 
> event then both would do for_each_possible_cpu()). And if this change 
> were to arrive as part of Sourabh's PPC support, then it does not 
> appear to impact x86 (not sure about other arches). And the 'crash' 
> dump analyzer doesn't care either way.
>
> Including this change would enable an optimization path (for x86 at 
> least) that short-circuits cpu hotplug changes in the arch crash 
> handler, for example:
>
> diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
> index aca3f1817674..0883f6b11de4 100644
> --- a/arch/x86/kernel/crash.c
> +++ b/arch/x86/kernel/crash.c
> @@ -473,6 +473,11 @@ void arch_crash_handle_hotplug_event(struct 
> kimage *image)
>     unsigned long mem, memsz;
>     unsigned long elfsz = 0;
>
> +   if (image->file_mode && (
> +       image->hp_action == KEXEC_CRASH_HP_ADD_CPU ||
> +       image->hp_action == KEXEC_CRASH_HP_REMOVE_CPU))
> +       return;
> +
>     /*
>      * Create the new elfcorehdr reflecting the changes to CPU and/or
>      * memory resources.
>
> I'm not sure that is compelling given the infrequent nature of cpu 
> hotplug events.
It certainly closes/reduces the window where kdump is not active due to the
kexec segment update.

>
> In my mind I still have a question about kexec_load() path. The 
> userspace kexec can not do the equivalent of for_each_possible_cpu(). 
> It can obtain max possible cpus from /sys/devices/system/cpu/possible, 
> but for those cpus not present the /sys/devices/system/cpu/cpuXX is 
> not available and so the crash_notes entries is not available. My 
> attempts to expose all cpuXX lead to odd behavior that was requiring 
> changes in ACPI and arch code that looked untenable.
>
> There seem to be these options available for kexec_load() path:
> - immediately rewrite the elfcorehdr upon load via a call to 
> crash_prepare_elf64_headers(). I've made this work with the following, 
> as proof of concept:
Yes, regenerating/patching the elfcorehdr could be an option for the
kexec_load syscall.

>
> diff --git a/kernel/kexec.c b/kernel/kexec.c
> index cb8e6e6f983c..4eb201270f97 100644
> --- a/kernel/kexec.c
> +++ b/kernel/kexec.c
> @@ -163,6 +163,12 @@ static int do_kexec_load(unsigned long entry, 
> unsigned long
>     kimage_free(image);
>  out_unlock:
>     kexec_unlock();
> +   if (IS_ENABLED(CONFIG_CRASH_HOTPLUG)) {
> +       if ((flags & KEXEC_ON_CRASH) && kexec_crash_image) {
> +           crash_handle_hotplug_event(KEXEC_CRASH_HP_NONE, 
> KEXEC_CRASH_HP_INVALID_CPU);
> +       }
> +   }
>     return ret;
>  }
>
> - Another option is spend the time to determine whether exposing all 
> cpuXX is a viable solution; I have no idea what impacts to userspace 
> would be for possible-but-not-yet-present cpuXX entries would be. It 
> might also mean requiring a 'present' entry available within the cpuXX.
>
> - Another option is to simply let the hot plug events rewrite the 
> elfcorehdr on demand. This is what I've originally put forth, but not 
> sure how this impacts PPC given for_each_possible_cpu() change.
Given that /sys/devices/system/cpu/cpuXX is not present for
possible-but-not-yet-present CPUs, I am wondering whether we even have crash
notes for possible CPUs on x86?
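
For what it's worth, the notes buffers themselves are a plain per-cpu
allocation made at boot, so storage exists for every possible CPU on x86 as
well; what is missing for a non-present cpu is only the cpuXX/crash_notes
sysfs file that userspace reads. Roughly (paraphrasing the generic kexec code,
not a verbatim copy):

static note_buf_t __percpu *crash_notes;

static int __init crash_notes_memory_init(void)
{
	size_t size, align;

	/* One buffer per possible CPU; per-cpu allocations always cover
	 * the entire possible mask. */
	size = sizeof(note_buf_t);
	align = min(roundup_pow_of_two(sizeof(note_buf_t)), PAGE_SIZE);
	crash_notes = __alloc_percpu(size, align);
	if (!crash_notes) {
		pr_warn("Memory allocation for saving cpu register states failed\n");
		return -ENOMEM;
	}
	return 0;
}
subsys_initcall(crash_notes_memory_init);
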
>
> The concern is that today, both kexec_load and kexec_file_load mirror 
> each other with respect to for_each_present_cpu(); that is userspace 
> kexec is able to generate the elfcorehdr the same as would 
> kexec_file_load, for cpus. But by changing to for_each_possible_cpu(), 
> the two would deviate.

Thanks,
Sourabh Jain
  
Baoquan He Feb. 28, 2023, 12:44 p.m. UTC | #18
On 02/13/23 at 10:10am, Sourabh Jain wrote:
> 
> On 11/02/23 06:05, Eric DeVolder wrote:
> > 
> > 
> > On 2/10/23 00:29, Sourabh Jain wrote:
> > > 
> > > On 10/02/23 01:09, Eric DeVolder wrote:
> > > > 
> > > > 
> > > > On 2/9/23 12:43, Sourabh Jain wrote:
> > > > > Hello Eric,
> > > > > 
> > > > > On 09/02/23 23:01, Eric DeVolder wrote:
> > > > > > 
> > > > > > 
> > > > > > On 2/8/23 07:44, Thomas Gleixner wrote:
> > > > > > > Eric!
> > > > > > > 
> > > > > > > On Tue, Feb 07 2023 at 11:23, Eric DeVolder wrote:
> > > > > > > > On 2/1/23 05:33, Thomas Gleixner wrote:
> > > > > > > > 
> > > > > > > > So my latest solution is introduce two new CPUHP
> > > > > > > > states, CPUHP_AP_ELFCOREHDR_ONLINE
> > > > > > > > for onlining and CPUHP_BP_ELFCOREHDR_OFFLINE for
> > > > > > > > offlining. I'm open to better names.
> > > > > > > > 
> > > > > > > > The CPUHP_AP_ELFCOREHDR_ONLINE needs to be
> > > > > > > > placed after CPUHP_BRINGUP_CPU. My
> > > > > > > > attempts at locating this state failed when
> > > > > > > > inside the STARTING section, so I located
> > > > > > > > this just inside the ONLINE sectoin. The crash
> > > > > > > > hotplug handler is registered on
> > > > > > > > this state as the callback for the .startup method.
> > > > > > > > 
> > > > > > > > The CPUHP_BP_ELFCOREHDR_OFFLINE needs to be
> > > > > > > > placed before CPUHP_TEARDOWN_CPU, and I
> > > > > > > > placed it at the end of the PREPARE section.
> > > > > > > > This crash hotplug handler is also
> > > > > > > > registered on this state as the callback for the .teardown method.
> > > > > > > 
> > > > > > > TBH, that's still overengineered. Something like this:
> > > > > > > 
> > > > > > > bool cpu_is_alive(unsigned int cpu)
> > > > > > > {
> > > > > > >     struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
> > > > > > > 
> > > > > > >     return data_race(st->state) <= CPUHP_AP_IDLE_DEAD;
> > > > > > > }
> > > > > > > 
> > > > > > > and use this to query the actual state at crash
> > > > > > > time. That spares all
> > > > > > > those callback heuristics.
> > > > > > > 
> > > > > > > > I'm making my way though percpu crash_notes,
> > > > > > > > elfcorehdr, vmcoreinfo,
> > > > > > > > makedumpfile and (the consumer of it all) the
> > > > > > > > userspace crash utility,
> > > > > > > > in order to understand the impact of moving from
> > > > > > > > for_each_present_cpu()
> > > > > > > > to for_each_online_cpu().
> > > > > > > 
> > > > > > > Is the packing actually worth the trouble? What's the actual win?
> > > > > > > 
> > > > > > > Thanks,
> > > > > > > 
> > > > > > >          tglx
> > > > > > > 
> > > > > > > 
> > > > > > 
> > > > > > Thomas,
> > > > > > I've investigated the passing of crash notes through the
> > > > > > vmcore. What I've learned is that:
> > > > > > 
> > > > > > - linux/fs/proc/vmcore.c (which makedumpfile references
> > > > > > to do its job) does
> > > > > >   not care what the contents of cpu PT_NOTES are, but it
> > > > > > does coalesce them together.
> > > > > > 
> > > > > > - makedumpfile will count the number of cpu PT_NOTES in
> > > > > > order to determine its
> > > > > >   nr_cpus variable, which is reported in a header, but
> > > > > > otherwise unused (except
> > > > > >   for sadump method).
> > > > > > 
> > > > > > - the crash utility, for the purposes of determining the
> > > > > > cpus, does not appear to
> > > > > >   reference the elfcorehdr PT_NOTEs. Instead it locates the various
> > > > > >   cpu_[possible|present|online]_mask and computes
> > > > > > nr_cpus from that, and also of
> > > > > >   course which are online. In addition, when crash does
> > > > > > reference the cpu PT_NOTE,
> > > > > >   to get its prstatus, it does so by using a percpu
> > > > > > technique directly in the vmcore
> > > > > >   image memory, not via the ELF structure. Said
> > > > > > differently, it appears to me that
> > > > > >   crash utility doesn't rely on the ELF PT_NOTEs for
> > > > > > cpus; rather it obtains them
> > > > > >   via kernel cpumasks and the memory within the vmcore.
> > > > > > 
> > > > > > With this understanding, I did some testing. Perhaps the
> > > > > > most telling test was that I
> > > > > > changed the number of cpu PT_NOTEs emitted in the
> > > > > > crash_prepare_elf64_headers() to just 1,
> > > > > > hot plugged some cpus, then also took a few offline
> > > > > > sparsely via chcpu, then generated a
> > > > > > vmcore. The crash utility had no problem loading the
> > > > > > vmcore, it reported the proper number
> > > > > > of cpus and the number offline (despite only one cpu
> > > > > > PT_NOTE), and changing to a different
> > > > > > cpu via 'set -c 30' and the backtrace was completely valid.
> > > > > > 
> > > > > > My take away is that crash utility does not rely upon
> > > > > > ELF cpu PT_NOTEs, it obtains the
> > > > > > cpu information directly from kernel data structures.
> > > > > > Perhaps at one time crash relied
> > > > > > upon the ELF information, but no more. (Perhaps there
> > > > > > are other crash dump analyzers
> > > > > > that might rely on the ELF info?)
> > > > > > 
> > > > > > So, all this to say that I see no need to change
> > > > > > crash_prepare_elf64_headers(). There
> > > > > > is no compelling reason to move away from
> > > > > > for_each_present_cpu(), or modify the list for
> > > > > > online/offline.
> > > > > > 
> > > > > > Which then leaves the topic of the cpuhp state on which
> > > > > > to register. Perhaps reverting
> > > > > > back to the use of CPUHP_BP_PREPARE_DYN is the right
> > > > > > answer. There does not appear to
> > > > > > be a compelling need to accurately track whether the cpu
> > > > > > went online/offline for the
> > > > > > purposes of creating the elfcorehdr, as ultimately the
> > > > > > crash utility pulls that from
> > > > > > kernel data structures, not the elfcorehdr.
> > > > > > 
> > > > > > I think this is what Sourabh has known and has been
> > > > > > advocating for an optimization
> > > > > > path that allows not regenerating the elfcorehdr on cpu
> > > > > > changes (because all the percpu
> > > > > > structs are all laid out). I do think it best to leave
> > > > > > that as an arch choice.
> > > > > 
> > > > > Since things are clear on how the PT_NOTES are consumed in
> > > > > kdump kernel [fs/proc/vmcore.c],
> > > > > makedumpfile, and crash tool I need your opinion on this:
> > > > > 
> > > > > Do we really need to regenerate elfcorehdr for CPU hotplug events?
> > > > > If yes, can you please list the elfcorehdr components that
> > > > > changes due to CPU hotplug.
> > > > Due to the use of for_each_present_cpu(), it is possible for the
> > > > number of cpu PT_NOTEs
> > > > to fluctuate as cpus are un/plugged. Onlining/offlining of cpus
> > > > does not impact the
> > > > number of cpu PT_NOTEs (as the cpus are still present).
> > > > 
> > > > > 
> > > > >  From what I understood, crash notes are prepared for
> > > > > possible CPUs as system boots and
> > > > > could be used to create a PT_NOTE section for each possible
> > > > > CPU while generating the elfcorehdr
> > > > > during the kdump kernel load.
> > > > > 
> > > > > Now once the elfcorehdr is loaded with PT_NOTEs for every
> > > > > possible CPU there is no need to
> > > > > regenerate it for CPU hotplug events. Or do we?
> > > > 
> > > > For onlining/offlining of cpus, there is no need to regenerate
> > > > the elfcorehdr. However,
> > > > for actual hot un/plug of cpus, the answer is yes due to
> > > > for_each_present_cpu(). The
> > > > caveat here of course is that if crash utility is the only
> > > > coredump analyzer of concern,
> > > > then it doesn't care about these cpu PT_NOTEs and there would be
> > > > no need to re-generate them.
> > > > 
> > > > Also, I'm not sure if ARM cpu hotplug, which is just now coming
> > > > into mainstream, impacts
> > > > any of this.
> > > > 
> > > > Perhaps the one item that might help here is to distinguish
> > > > between actual hot un/plug of
> > > > cpus, versus onlining/offlining. At the moment, I can not
> > > > distinguish between a hot plug
> > > > event and an online event (and unplug/offline). If those were
> > > > distinguishable, then we
> > > > could only regenerate on un/plug events.
> > > > 
> > > > Or perhaps moving to for_each_possible_cpu() is the better choice?
> > > 
> > > Yes, because once elfcorehdr is built with possible CPUs we don't
> > > have to worry about
> > > hot[un]plug case.
> > > 
> > > Here is my view on how things should be handled if a core-dump
> > > analyzer is dependent on
> > > elfcorehdr PT_NOTEs to find online/offline CPUs.
> > > 
> > > A PT_NOTE in elfcorehdr holds the address of the corresponding crash
> > > notes (kernel has
> > > one crash note per CPU for every possible CPU). Though the crash
> > > notes are allocated
> > > during the boot time they are populated when the system is on the
> > > crash path.
> > > 
> > > This is how crash notes are populated on PowerPC and I am expecting
> > > it would be something
> > > similar on other architectures too.
> > > 
> > > The crashing CPU sends IPI to every other online CPU with a callback
> > > function that updates the
> > > crash notes of that specific CPU. Once the IPI completes the
> > > crashing CPU updates its own crash
> > > note and proceeds further.
> > > 
> > > The crash notes of CPUs remain uninitialized if the CPUs were
> > > offline or hot unplugged at the time
> > > system crash. The core-dump analyzer should be able to identify
> > > [un]/initialized crash notes
> > > and display the information accordingly.
> > > 
> > > Thoughts?
> > > 
> > > - Sourabh
> > 
> > In general, I agree with your points. You've presented a strong case to
> > go with for_each_possible_cpu() in crash_prepare_elf64_headers() and
> > those crash notes would always be present, and we can ignore changes to
> > cpus wrt/ elfcorehdr updates.
> > 
> > But what do we do about kexec_load() syscall? The way the userspace
> > utility works is it determines cpus by:
> >  nr_cpus = sysconf(_SC_NPROCESSORS_CONF);
> > which is not the equivalent of possible_cpus. So the complete list of
> > cpu PT_NOTEs is not generated up front. We would need a solution for
> > that?
> Hello Eric,
> 
> The sysconf document says _SC_NPROCESSORS_CONF is processors configured,
> isn't that equivalent to possible CPUs?
> 
> What exactly sysconf(_SC_NPROCESSORS_CONF) returns on x86? IIUC, on powerPC
> it is possible CPUs.

From sysconf man page, with my understanding, _SC_NPROCESSORS_CONF is
returning the possible cpus, while _SC_NPROCESSORS_ONLN returns present
cpus. If these are true, we can use them.

But I am wondering why the existing present cpu way is going to be
discarded. Sorry, I tried to go through this thread, it's too long, can
anyone summarize the reason with shorter and clear sentences. Sorry
again for that.

> 
> In case sysconf(_SC_NPROCESSORS_CONF) is not consistent then we can go with:
> /sys/devices/system/cpu/possible for kexec_load case.
> 
> Thoughts?
> 
> - Sourabh Jain
>
  
Eric DeVolder Feb. 28, 2023, 6:52 p.m. UTC | #19
On 2/28/23 06:44, Baoquan He wrote:
> On 02/13/23 at 10:10am, Sourabh Jain wrote:
>>
>> On 11/02/23 06:05, Eric DeVolder wrote:
>>>
>>>
>>> On 2/10/23 00:29, Sourabh Jain wrote:
>>>>
>>>> On 10/02/23 01:09, Eric DeVolder wrote:
>>>>>
>>>>>
>>>>> On 2/9/23 12:43, Sourabh Jain wrote:
>>>>>> Hello Eric,
>>>>>>
>>>>>> On 09/02/23 23:01, Eric DeVolder wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 2/8/23 07:44, Thomas Gleixner wrote:
>>>>>>>> Eric!
>>>>>>>>
>>>>>>>> On Tue, Feb 07 2023 at 11:23, Eric DeVolder wrote:
>>>>>>>>> On 2/1/23 05:33, Thomas Gleixner wrote:
>>>>>>>>>
>>>>>>>>> So my latest solution is introduce two new CPUHP
>>>>>>>>> states, CPUHP_AP_ELFCOREHDR_ONLINE
>>>>>>>>> for onlining and CPUHP_BP_ELFCOREHDR_OFFLINE for
>>>>>>>>> offlining. I'm open to better names.
>>>>>>>>>
>>>>>>>>> The CPUHP_AP_ELFCOREHDR_ONLINE needs to be
>>>>>>>>> placed after CPUHP_BRINGUP_CPU. My
>>>>>>>>> attempts at locating this state failed when
>>>>>>>>> inside the STARTING section, so I located
>>>>>>>>> this just inside the ONLINE sectoin. The crash
>>>>>>>>> hotplug handler is registered on
>>>>>>>>> this state as the callback for the .startup method.
>>>>>>>>>
>>>>>>>>> The CPUHP_BP_ELFCOREHDR_OFFLINE needs to be
>>>>>>>>> placed before CPUHP_TEARDOWN_CPU, and I
>>>>>>>>> placed it at the end of the PREPARE section.
>>>>>>>>> This crash hotplug handler is also
>>>>>>>>> registered on this state as the callback for the .teardown method.
>>>>>>>>
>>>>>>>> TBH, that's still overengineered. Something like this:
>>>>>>>>
>>>>>>>> bool cpu_is_alive(unsigned int cpu)
>>>>>>>> {
>>>>>>>>      struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
>>>>>>>>
>>>>>>>>      return data_race(st->state) <= CPUHP_AP_IDLE_DEAD;
>>>>>>>> }
>>>>>>>>
>>>>>>>> and use this to query the actual state at crash
>>>>>>>> time. That spares all
>>>>>>>> those callback heuristics.
>>>>>>>>
>>>>>>>>> I'm making my way though percpu crash_notes,
>>>>>>>>> elfcorehdr, vmcoreinfo,
>>>>>>>>> makedumpfile and (the consumer of it all) the
>>>>>>>>> userspace crash utility,
>>>>>>>>> in order to understand the impact of moving from
>>>>>>>>> for_each_present_cpu()
>>>>>>>>> to for_each_online_cpu().
>>>>>>>>
>>>>>>>> Is the packing actually worth the trouble? What's the actual win?
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>>
>>>>>>>>           tglx
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>> Thomas,
>>>>>>> I've investigated the passing of crash notes through the
>>>>>>> vmcore. What I've learned is that:
>>>>>>>
>>>>>>> - linux/fs/proc/vmcore.c (which makedumpfile references
>>>>>>> to do its job) does
>>>>>>>    not care what the contents of cpu PT_NOTES are, but it
>>>>>>> does coalesce them together.
>>>>>>>
>>>>>>> - makedumpfile will count the number of cpu PT_NOTES in
>>>>>>> order to determine its
>>>>>>>    nr_cpus variable, which is reported in a header, but
>>>>>>> otherwise unused (except
>>>>>>>    for sadump method).
>>>>>>>
>>>>>>> - the crash utility, for the purposes of determining the
>>>>>>> cpus, does not appear to
>>>>>>>    reference the elfcorehdr PT_NOTEs. Instead it locates the various
>>>>>>>    cpu_[possible|present|online]_mask and computes
>>>>>>> nr_cpus from that, and also of
>>>>>>>    course which are online. In addition, when crash does
>>>>>>> reference the cpu PT_NOTE,
>>>>>>>    to get its prstatus, it does so by using a percpu
>>>>>>> technique directly in the vmcore
>>>>>>>    image memory, not via the ELF structure. Said
>>>>>>> differently, it appears to me that
>>>>>>>    crash utility doesn't rely on the ELF PT_NOTEs for
>>>>>>> cpus; rather it obtains them
>>>>>>>    via kernel cpumasks and the memory within the vmcore.
>>>>>>>
>>>>>>> With this understanding, I did some testing. Perhaps the
>>>>>>> most telling test was that I
>>>>>>> changed the number of cpu PT_NOTEs emitted in the
>>>>>>> crash_prepare_elf64_headers() to just 1,
>>>>>>> hot plugged some cpus, then also took a few offline
>>>>>>> sparsely via chcpu, then generated a
>>>>>>> vmcore. The crash utility had no problem loading the
>>>>>>> vmcore, it reported the proper number
>>>>>>> of cpus and the number offline (despite only one cpu
>>>>>>> PT_NOTE), and changing to a different
>>>>>>> cpu via 'set -c 30' and the backtrace was completely valid.
>>>>>>>
>>>>>>> My take away is that crash utility does not rely upon
>>>>>>> ELF cpu PT_NOTEs, it obtains the
>>>>>>> cpu information directly from kernel data structures.
>>>>>>> Perhaps at one time crash relied
>>>>>>> upon the ELF information, but no more. (Perhaps there
>>>>>>> are other crash dump analyzers
>>>>>>> that might rely on the ELF info?)
>>>>>>>
>>>>>>> So, all this to say that I see no need to change
>>>>>>> crash_prepare_elf64_headers(). There
>>>>>>> is no compelling reason to move away from
>>>>>>> for_each_present_cpu(), or modify the list for
>>>>>>> online/offline.
>>>>>>>
>>>>>>> Which then leaves the topic of the cpuhp state on which
>>>>>>> to register. Perhaps reverting
>>>>>>> back to the use of CPUHP_BP_PREPARE_DYN is the right
>>>>>>> answer. There does not appear to
>>>>>>> be a compelling need to accurately track whether the cpu
>>>>>>> went online/offline for the
>>>>>>> purposes of creating the elfcorehdr, as ultimately the
>>>>>>> crash utility pulls that from
>>>>>>> kernel data structures, not the elfcorehdr.
>>>>>>>
>>>>>>> I think this is what Sourabh has known and has been
>>>>>>> advocating for an optimization
>>>>>>> path that allows not regenerating the elfcorehdr on cpu
>>>>>>> changes (because all the percpu
>>>>>>> structs are all laid out). I do think it best to leave
>>>>>>> that as an arch choice.
>>>>>>
>>>>>> Since things are clear on how the PT_NOTES are consumed in
>>>>>> kdump kernel [fs/proc/vmcore.c],
>>>>>> makedumpfile, and crash tool I need your opinion on this:
>>>>>>
>>>>>> Do we really need to regenerate elfcorehdr for CPU hotplug events?
>>>>>> If yes, can you please list the elfcorehdr components that
>>>>>> changes due to CPU hotplug.
>>>>> Due to the use of for_each_present_cpu(), it is possible for the
>>>>> number of cpu PT_NOTEs
>>>>> to fluctuate as cpus are un/plugged. Onlining/offlining of cpus
>>>>> does not impact the
>>>>> number of cpu PT_NOTEs (as the cpus are still present).
>>>>>
>>>>>>
>>>>>>   From what I understood, crash notes are prepared for
>>>>>> possible CPUs as system boots and
>>>>>> could be used to create a PT_NOTE section for each possible
>>>>>> CPU while generating the elfcorehdr
>>>>>> during the kdump kernel load.
>>>>>>
>>>>>> Now once the elfcorehdr is loaded with PT_NOTEs for every
>>>>>> possible CPU there is no need to
>>>>>> regenerate it for CPU hotplug events. Or do we?
>>>>>
>>>>> For onlining/offlining of cpus, there is no need to regenerate
>>>>> the elfcorehdr. However,
>>>>> for actual hot un/plug of cpus, the answer is yes due to
>>>>> for_each_present_cpu(). The
>>>>> caveat here of course is that if crash utility is the only
>>>>> coredump analyzer of concern,
>>>>> then it doesn't care about these cpu PT_NOTEs and there would be
>>>>> no need to re-generate them.
>>>>>
>>>>> Also, I'm not sure if ARM cpu hotplug, which is just now coming
>>>>> into mainstream, impacts
>>>>> any of this.
>>>>>
>>>>> Perhaps the one item that might help here is to distinguish
>>>>> between actual hot un/plug of
>>>>> cpus, versus onlining/offlining. At the moment, I can not
>>>>> distinguish between a hot plug
>>>>> event and an online event (and unplug/offline). If those were
>>>>> distinguishable, then we
>>>>> could only regenerate on un/plug events.
>>>>>
>>>>> Or perhaps moving to for_each_possible_cpu() is the better choice?
>>>>
>>>> Yes, because once elfcorehdr is built with possible CPUs we don't
>>>> have to worry about
>>>> hot[un]plug case.
>>>>
>>>> Here is my view on how things should be handled if a core-dump
>>>> analyzer is dependent on
>>>> elfcorehdr PT_NOTEs to find online/offline CPUs.
>>>>
>>>> A PT_NOTE in elfcorehdr holds the address of the corresponding crash
>>>> notes (kernel has
>>>> one crash note per CPU for every possible CPU). Though the crash
>>>> notes are allocated
>>>> during the boot time they are populated when the system is on the
>>>> crash path.
>>>>
>>>> This is how crash notes are populated on PowerPC and I am expecting
>>>> it would be something
>>>> similar on other architectures too.
>>>>
>>>> The crashing CPU sends IPI to every other online CPU with a callback
>>>> function that updates the
>>>> crash notes of that specific CPU. Once the IPI completes the
>>>> crashing CPU updates its own crash
>>>> note and proceeds further.
>>>>
>>>> The crash notes of CPUs remain uninitialized if the CPUs were
>>>> offline or hot unplugged at the time
>>>> system crash. The core-dump analyzer should be able to identify
>>>> [un]/initialized crash notes
>>>> and display the information accordingly.
>>>>
>>>> Thoughts?
>>>>
>>>> - Sourabh
>>>
>>> In general, I agree with your points. You've presented a strong case to
>>> go with for_each_possible_cpu() in crash_prepare_elf64_headers() and
>>> those crash notes would always be present, and we can ignore changes to
>>> cpus wrt/ elfcorehdr updates.
>>>
>>> But what do we do about kexec_load() syscall? The way the userspace
>>> utility works is it determines cpus by:
>>>   nr_cpus = sysconf(_SC_NPROCESSORS_CONF);
>>> which is not the equivalent of possible_cpus. So the complete list of
>>> cpu PT_NOTEs is not generated up front. We would need a solution for
>>> that?
>> Hello Eric,
>>
>> The sysconf document says _SC_NPROCESSORS_CONF is processors configured,
>> isn't that equivalent to possible CPUs?
>>
>> What exactly sysconf(_SC_NPROCESSORS_CONF) returns on x86? IIUC, on powerPC
>> it is possible CPUs.
> 
Baoquan,

>  From sysconf man page, with my understanding, _SC_NPROCESSORS_CONF is
> returning the possible cpus, while _SC_NPROCESSORS_ONLN returns present
> cpus. If these are true, we can use them.

Thomas Gleixner has pointed out that:

  glibc tries to evaluate that in the following order:
   1) /sys/devices/system/cpu/cpu*
      That's present CPUs not possible CPUs
   2) /proc/stat
      That's online CPUs
   3) sched_getaffinity()
      That's online CPUs at best. In the worst case it's an affinity mask
      which is set on a process group

meaning that _SC_NPROCESSORS_CONF is not equivalent to possible_cpus(). Furthermore, the
/sys/devices/system/cpu/cpuXX entries are not available for not-present-but-possible cpus; thus the
userspace kexec utility can not write out the elfcorehdr with all possible cpus listed.
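
(For completeness, the possible-cpu range itself is visible to userspace via
/sys/devices/system/cpu/possible; a minimal sketch of reading it, with a hypothetical helper name
and assuming the usual "first-last" format:

#include <stdio.h>

/* Parse /sys/devices/system/cpu/possible, e.g. "0-127" (or just "0"),
 * and return the highest possible cpu number, or -1 on error. */
static int max_possible_cpu(void)
{
	FILE *fp = fopen("/sys/devices/system/cpu/possible", "r");
	int first = 0, last = 0;

	if (!fp)
		return -1;
	if (fscanf(fp, "%d-%d", &first, &last) < 2)
		last = first;
	fclose(fp);
	return last;
}

Knowing the count alone does not help, though, since the per-cpu crash_notes addresses for
not-present cpus are still not exposed to userspace.)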

> 
> But I am wondering why the existing present cpu way is going to be
> discarded. Sorry, I tried to go through this thread, it's too long, can
> anyone summarize the reason with shorter and clear sentences. Sorry
> again for that.

Utilizing for_each_possible_cpu() in crash_prepare_elf64_headers() would, for the kexec_file_load()
case, simplify some issues Sourabh has encountered for PPC support. It would also enable an
optimization that permits NOT re-generating the elfcorehdr on cpu changes, as all the [possible]
cpus are already described in the elfcorehdr.
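
As a rough sketch (simplified from the existing crash_prepare_elf64_headers() loop; error handling
and the surrounding phdr bookkeeping are omitted), the cpu PT_NOTE generation would then look
something like:

	/* One PT_NOTE phdr per possible cpu, pointing at its percpu crash notes */
	for_each_possible_cpu(cpu) {
		phdr->p_type = PT_NOTE;
		notes_addr = per_cpu_ptr_to_phys(per_cpu_ptr(crash_notes, cpu));
		phdr->p_offset = phdr->p_paddr = notes_addr;
		phdr->p_filesz = phdr->p_memsz = sizeof(note_buf_t);
		(ehdr->e_phnum)++;
		phdr++;
	}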

I've pointed out that this change would initially have kexec_load (as kexec-tools can only write out
the present_cpus()) deviate from kexec_file_load (which would now write out the possible_cpus()).
This deviation would disappear after the first hotplug event (due to calling
crash_prepare_elf64_headers()). Alternatively, I've provided a simple way for kexec_load to rewrite
its elfcorehdr upon initial load (by calling into the crash hotplug handler).

Can you think of any side effects of going to for_each_possible_cpu()?

Thanks,
eric


> 
>>
>> In case sysconf(_SC_NPROCESSORS_CONF) is not consistent then we can go with:
>> /sys/devices/system/cpu/possible for kexec_load case.
>>
>> Thoughts?
>>
>> - Sourabh Jain
>>
>
  
Eric DeVolder Feb. 28, 2023, 9:50 p.m. UTC | #20
On 2/27/23 00:11, Sourabh Jain wrote:
> 
> On 25/02/23 01:46, Eric DeVolder wrote:
>>
>>
>> On 2/24/23 02:34, Sourabh Jain wrote:
>>>
>>> On 24/02/23 02:04, Eric DeVolder wrote:
>>>>
>>>>
>>>> On 2/10/23 00:29, Sourabh Jain wrote:
>>>>>
>>>>> On 10/02/23 01:09, Eric DeVolder wrote:
>>>>>>
>>>>>>
>>>>>> On 2/9/23 12:43, Sourabh Jain wrote:
>>>>>>> Hello Eric,
>>>>>>>
>>>>>>> On 09/02/23 23:01, Eric DeVolder wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> On 2/8/23 07:44, Thomas Gleixner wrote:
>>>>>>>>> Eric!
>>>>>>>>>
>>>>>>>>> On Tue, Feb 07 2023 at 11:23, Eric DeVolder wrote:
>>>>>>>>>> On 2/1/23 05:33, Thomas Gleixner wrote:
>>>>>>>>>>
>>>>>>>>>> So my latest solution is introduce two new CPUHP states, CPUHP_AP_ELFCOREHDR_ONLINE
>>>>>>>>>> for onlining and CPUHP_BP_ELFCOREHDR_OFFLINE for offlining. I'm open to better names.
>>>>>>>>>>
>>>>>>>>>> The CPUHP_AP_ELFCOREHDR_ONLINE needs to be placed after CPUHP_BRINGUP_CPU. My
>>>>>>>>>> attempts at locating this state failed when inside the STARTING section, so I located
>>>>>>>>>> this just inside the ONLINE sectoin. The crash hotplug handler is registered on
>>>>>>>>>> this state as the callback for the .startup method.
>>>>>>>>>>
>>>>>>>>>> The CPUHP_BP_ELFCOREHDR_OFFLINE needs to be placed before CPUHP_TEARDOWN_CPU, and I
>>>>>>>>>> placed it at the end of the PREPARE section. This crash hotplug handler is also
>>>>>>>>>> registered on this state as the callback for the .teardown method.
>>>>>>>>>
>>>>>>>>> TBH, that's still overengineered. Something like this:
>>>>>>>>>
>>>>>>>>> bool cpu_is_alive(unsigned int cpu)
>>>>>>>>> {
>>>>>>>>>     struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
>>>>>>>>>
>>>>>>>>>     return data_race(st->state) <= CPUHP_AP_IDLE_DEAD;
>>>>>>>>> }
>>>>>>>>>
>>>>>>>>> and use this to query the actual state at crash time. That spares all
>>>>>>>>> those callback heuristics.
>>>>>>>>>
>>>>>>>>>> I'm making my way though percpu crash_notes, elfcorehdr, vmcoreinfo,
>>>>>>>>>> makedumpfile and (the consumer of it all) the userspace crash utility,
>>>>>>>>>> in order to understand the impact of moving from for_each_present_cpu()
>>>>>>>>>> to for_each_online_cpu().
>>>>>>>>>
>>>>>>>>> Is the packing actually worth the trouble? What's the actual win?
>>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>>
>>>>>>>>>          tglx
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>> Thomas,
>>>>>>>> I've investigated the passing of crash notes through the vmcore. What I've learned is that:
>>>>>>>>
>>>>>>>> - linux/fs/proc/vmcore.c (which makedumpfile references to do its job) does
>>>>>>>>   not care what the contents of cpu PT_NOTES are, but it does coalesce them together.
>>>>>>>>
>>>>>>>> - makedumpfile will count the number of cpu PT_NOTES in order to determine its
>>>>>>>>   nr_cpus variable, which is reported in a header, but otherwise unused (except
>>>>>>>>   for sadump method).
>>>>>>>>
>>>>>>>> - the crash utility, for the purposes of determining the cpus, does not appear to
>>>>>>>>   reference the elfcorehdr PT_NOTEs. Instead it locates the various
>>>>>>>>   cpu_[possible|present|online]_mask and computes nr_cpus from that, and also of
>>>>>>>>   course which are online. In addition, when crash does reference the cpu PT_NOTE,
>>>>>>>>   to get its prstatus, it does so by using a percpu technique directly in the vmcore
>>>>>>>>   image memory, not via the ELF structure. Said differently, it appears to me that
>>>>>>>>   crash utility doesn't rely on the ELF PT_NOTEs for cpus; rather it obtains them
>>>>>>>>   via kernel cpumasks and the memory within the vmcore.
>>>>>>>>
>>>>>>>> With this understanding, I did some testing. Perhaps the most telling test was that I
>>>>>>>> changed the number of cpu PT_NOTEs emitted in the crash_prepare_elf64_headers() to just 1,
>>>>>>>> hot plugged some cpus, then also took a few offline sparsely via chcpu, then generated a
>>>>>>>> vmcore. The crash utility had no problem loading the vmcore, it reported the proper number
>>>>>>>> of cpus and the number offline (despite only one cpu PT_NOTE), and changing to a different
>>>>>>>> cpu via 'set -c 30' and the backtrace was completely valid.
>>>>>>>>
>>>>>>>> My take away is that crash utility does not rely upon ELF cpu PT_NOTEs, it obtains the
>>>>>>>> cpu information directly from kernel data structures. Perhaps at one time crash relied
>>>>>>>> upon the ELF information, but no more. (Perhaps there are other crash dump analyzers
>>>>>>>> that might rely on the ELF info?)
>>>>>>>>
>>>>>>>> So, all this to say that I see no need to change crash_prepare_elf64_headers(). There
>>>>>>>> is no compelling reason to move away from for_each_present_cpu(), or modify the list for
>>>>>>>> online/offline.
>>>>>>>>
>>>>>>>> Which then leaves the topic of the cpuhp state on which to register. Perhaps reverting
>>>>>>>> back to the use of CPUHP_BP_PREPARE_DYN is the right answer. There does not appear to
>>>>>>>> be a compelling need to accurately track whether the cpu went online/offline for the
>>>>>>>> purposes of creating the elfcorehdr, as ultimately the crash utility pulls that from
>>>>>>>> kernel data structures, not the elfcorehdr.
>>>>>>>>
>>>>>>>> I think this is what Sourabh has known and has been advocating for an optimization
>>>>>>>> path that allows not regenerating the elfcorehdr on cpu changes (because all the percpu
>>>>>>>> structs are all laid out). I do think it best to leave that as an arch choice.
>>>>>>>
>>>>>>> Since things are clear on how the PT_NOTES are consumed in kdump kernel [fs/proc/vmcore.c],
>>>>>>> makedumpfile, and crash tool I need your opinion on this:
>>>>>>>
>>>>>>> Do we really need to regenerate elfcorehdr for CPU hotplug events?
>>>>>>> If yes, can you please list the elfcorehdr components that changes due to CPU hotplug.
>>>>>> Due to the use of for_each_present_cpu(), it is possible for the number of cpu PT_NOTEs
>>>>>> to fluctuate as cpus are un/plugged. Onlining/offlining of cpus does not impact the
>>>>>> number of cpu PT_NOTEs (as the cpus are still present).
>>>>>>
>>>>>>>
>>>>>>>  From what I understood, crash notes are prepared for possible CPUs as system boots and
>>>>>>> could be used to create a PT_NOTE section for each possible CPU while generating the elfcorehdr
>>>>>>> during the kdump kernel load.
>>>>>>>
>>>>>>> Now once the elfcorehdr is loaded with PT_NOTEs for every possible CPU there is no need to
>>>>>>> regenerate it for CPU hotplug events. Or do we?
>>>>>>
>>>>>> For onlining/offlining of cpus, there is no need to regenerate the elfcorehdr. However,
>>>>>> for actual hot un/plug of cpus, the answer is yes due to for_each_present_cpu(). The
>>>>>> caveat here of course is that if crash utility is the only coredump analyzer of concern,
>>>>>> then it doesn't care about these cpu PT_NOTEs and there would be no need to re-generate them.
>>>>>>
>>>>>> Also, I'm not sure if ARM cpu hotplug, which is just now coming into mainstream, impacts
>>>>>> any of this.
>>>>>>
>>>>>> Perhaps the one item that might help here is to distinguish between actual hot un/plug of
>>>>>> cpus, versus onlining/offlining. At the moment, I can not distinguish between a hot plug
>>>>>> event and an online event (and unplug/offline). If those were distinguishable, then we
>>>>>> could only regenerate on un/plug events.
>>>>>>
>>>>>> Or perhaps moving to for_each_possible_cpu() is the better choice?
>>>>>
>>>>> Yes, because once elfcorehdr is built with possible CPUs we don't have to worry about
>>>>> hot[un]plug case.
>>>>>
>>>>> Here is my view on how things should be handled if a core-dump analyzer is dependent on
>>>>> elfcorehdr PT_NOTEs to find online/offline CPUs.
>>>>>
>>>>> A PT_NOTE in elfcorehdr holds the address of the corresponding crash notes (kernel has
>>>>> one crash note per CPU for every possible CPU). Though the crash notes are allocated
>>>>> during the boot time they are populated when the system is on the crash path.
>>>>>
>>>>> This is how crash notes are populated on PowerPC and I am expecting it would be something
>>>>> similar on other architectures too.
>>>>>
>>>>> The crashing CPU sends IPI to every other online CPU with a callback function that updates the
>>>>> crash notes of that specific CPU. Once the IPI completes the crashing CPU updates its own crash
>>>>> note and proceeds further.
>>>>>
>>>>> The crash notes of CPUs remain uninitialized if the CPUs were offline or hot unplugged at the time
>>>>> system crash. The core-dump analyzer should be able to identify [un]/initialized crash notes
>>>>> and display the information accordingly.
>>>>>
>>>>> Thoughts?
>>>>>
>>>>> - Sourabh
>>>>
>>>> I've been examining what it would mean to move to for_each_possible_cpu() in 
>>>> crash_prepare_elf64_headers(). I think it means:
>>>>
>>>> - Changing for_each_present_cpu() to for_each_possible_cpu() in crash_prepare_elf64_headers().
>>>> - For kexec_load() syscall path, rewrite the incoming/supplied elfcorehdr immediately on the 
>>>> load with the elfcorehdr generated by crash_prepare_elf64_headers().
>>>> - Eliminate/remove the cpuhp machinery for handling crash hotplug events.
>>>
>>> If for_each_present_cpu is replaced with for_each_possible_cpu I still need cpuhp machinery
>>> to update FDT kexec segment for CPU hot add case.
>>
>> Ah, ok, that's important! So the cpuhp callbacks are still needed.
>>>
>>>
>>>>
>>>> This would then setup PT_NOTEs for all possible cpus, which should in theory accommodate crash 
>>>> analyzers that rely on ELF PT_NOTEs for crash_notes.
>>>>
>>>> If staying with for_each_present_cpu() is ultimately decided, then I think leaving the cpuhp 
>>>> machinery in place and each arch could decide how to handle crash cpu hotplug events. The 
>>>> overhead for doing this is very minimal, and the events are likely very infrequent.
>>>
>>> I agree. Some architectures may need cpuhp machinery to update kexec segment[s] other then 
>>> elfcorehdr. For example FDT on PowerPC.
>>>
>>> - Sourabh Jain
>>
>> OK, I was thinking that the desire was to eliminate the cpuhp callbacks. In reality, the desire is 
>> to change to for_each_possible_cpu(). Given that the kernel creates crash_notes for all possible 
>> cpus upon kernel boot, there seems to be no reason to not do this?
>>
>> HOWEVER...
>>
>> It's not clear to me that this particular change needs to be part of this series. It's inclusion 
>> would facilitate PPC support, but doesn't "solve" anything in general. In fact it causes 
>> kexec_load and kexec_file_load to deviate (kexec_load via userspace kexec does the equivalent of 
>> for_each_present_cpu() where as with this change kexec_file_load would do for_each_possible_cpu(); 
>> until a hot plug event then both would do for_each_possible_cpu()). And if this change were to 
>> arrive as part of Sourabh's PPC support, then it does not appear to impact x86 (not sure about 
>> other arches). And the 'crash' dump analyzer doesn't care either way.
>>
>> Including this change would enable an optimization path (for x86 at least) that short-circuits cpu 
>> hotplug changes in the arch crash handler, for example:
>>
>> diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
>> index aca3f1817674..0883f6b11de4 100644
>> --- a/arch/x86/kernel/crash.c
>> +++ b/arch/x86/kernel/crash.c
>> @@ -473,6 +473,11 @@ void arch_crash_handle_hotplug_event(struct kimage *image)
>>     unsigned long mem, memsz;
>>     unsigned long elfsz = 0;
>>
>> +   if (image->file_mode && (
>> +       image->hp_action == KEXEC_CRASH_HP_ADD_CPU ||
>> +       image->hp_action == KEXEC_CRASH_HP_REMOVE_CPU))
>> +       return;
>> +
>>     /*
>>      * Create the new elfcorehdr reflecting the changes to CPU and/or
>>      * memory resources.
>>
>> I'm not sure that is compelling given the infrequent nature of cpu hotplug events.
> It certainly closes/reduces the window where kdump is not active due kexec segment update.|

Fair enough. I plan to include this change in v19.

> 
>>
>> In my mind I still have a question about kexec_load() path. The userspace kexec can not do the 
>> equivalent of for_each_possible_cpu(). It can obtain max possible cpus from 
>> /sys/devices/system/cpu/possible, but for those cpus not present the /sys/devices/system/cpu/cpuXX 
>> is not available and so the crash_notes entries is not available. My attempts to expose all cpuXX 
>> lead to odd behavior that was requiring changes in ACPI and arch code that looked untenable.
>>
>> There seem to be these options available for kexec_load() path:
>> - immediately rewrite the elfcorehdr upon load via a call to crash_prepare_elf64_headers(). I've 
>> made this work with the following, as proof of concept:
> Yes regenerating/patching the elfcorehdr could be an option for kexec_load syscall.
So this is not needed by x86, but more so by ppc. Should this change be in the ppc set or this set?


> 
>>
>> diff --git a/kernel/kexec.c b/kernel/kexec.c
>> index cb8e6e6f983c..4eb201270f97 100644
>> --- a/kernel/kexec.c
>> +++ b/kernel/kexec.c
>> @@ -163,6 +163,12 @@ static int do_kexec_load(unsigned long entry, unsigned long
>>     kimage_free(image);
>>  out_unlock:
>>     kexec_unlock();
>> +   if (IS_ENABLED(CONFIG_CRASH_HOTPLUG)) {
>> +       if ((flags & KEXEC_ON_CRASH) && kexec_crash_image) {
>> +           crash_handle_hotplug_event(KEXEC_CRASH_HP_NONE, KEXEC_CRASH_HP_INVALID_CPU);
>> +       }
>> +   }
>>     return ret;
>>  }
>>
>> - Another option is spend the time to determine whether exposing all cpuXX is a viable solution; I 
>> have no idea what impacts to userspace would be for possible-but-not-yet-present cpuXX entries 
>> would be. It might also mean requiring a 'present' entry available within the cpuXX.
>>
>> - Another option is to simply let the hot plug events rewrite the elfcorehdr on demand. This is 
>> what I've originally put forth, but not sure how this impacts PPC given for_each_possible_cpu() 
>> change.
> Given that /sys/devices/system/cpu/cpuXX is not present for possbile-but-not-yet-present CPUs, I am 
> wondering do we even have crash notes for possible CPUs on x86?
Yes, there are crash_notes for all possible cpus on x86.
eric
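
For reference, the percpu allocator reserves a note buffer for every possible cpu at boot; a
simplified sketch (modeled on crash_notes_memory_init(), exact details vary by kernel version):

static note_buf_t __percpu *crash_notes;

static int __init crash_notes_memory_init(void)
{
	/* Allocate a register-save (crash note) buffer for each possible cpu */
	size_t size  = sizeof(note_buf_t);
	size_t align = min_t(size_t, roundup_pow_of_two(size), PAGE_SIZE);

	crash_notes = __alloc_percpu(size, align);
	return crash_notes ? 0 : -ENOMEM;
}
subsys_initcall(crash_notes_memory_init);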

>>
>> The concern is that today, both kexec_load and kexec_file_load mirror each other with respect to 
>> for_each_present_cpu(); that is userspace kexec is able to generate the elfcorehdr the same as 
>> would kexec_file_load, for cpus. But by changing to for_each_possible_cpu(), the two would deviate.
> 
> Thanks,
> Sourabh Jain
  
Sourabh Jain March 1, 2023, 6:22 a.m. UTC | #21
On 01/03/23 03:20, Eric DeVolder wrote:
>
>
> On 2/27/23 00:11, Sourabh Jain wrote:
>>
>> On 25/02/23 01:46, Eric DeVolder wrote:
>>>
>>>
>>> On 2/24/23 02:34, Sourabh Jain wrote:
>>>>
>>>> On 24/02/23 02:04, Eric DeVolder wrote:
>>>>>
>>>>>
>>>>> On 2/10/23 00:29, Sourabh Jain wrote:
>>>>>>
>>>>>> On 10/02/23 01:09, Eric DeVolder wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 2/9/23 12:43, Sourabh Jain wrote:
>>>>>>>> Hello Eric,
>>>>>>>>
>>>>>>>> On 09/02/23 23:01, Eric DeVolder wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 2/8/23 07:44, Thomas Gleixner wrote:
>>>>>>>>>> Eric!
>>>>>>>>>>
>>>>>>>>>> On Tue, Feb 07 2023 at 11:23, Eric DeVolder wrote:
>>>>>>>>>>> On 2/1/23 05:33, Thomas Gleixner wrote:
>>>>>>>>>>>
>>>>>>>>>>> So my latest solution is introduce two new CPUHP states, 
>>>>>>>>>>> CPUHP_AP_ELFCOREHDR_ONLINE
>>>>>>>>>>> for onlining and CPUHP_BP_ELFCOREHDR_OFFLINE for offlining. 
>>>>>>>>>>> I'm open to better names.
>>>>>>>>>>>
>>>>>>>>>>> The CPUHP_AP_ELFCOREHDR_ONLINE needs to be placed after 
>>>>>>>>>>> CPUHP_BRINGUP_CPU. My
>>>>>>>>>>> attempts at locating this state failed when inside the 
>>>>>>>>>>> STARTING section, so I located
>>>>>>>>>>> this just inside the ONLINE sectoin. The crash hotplug 
>>>>>>>>>>> handler is registered on
>>>>>>>>>>> this state as the callback for the .startup method.
>>>>>>>>>>>
>>>>>>>>>>> The CPUHP_BP_ELFCOREHDR_OFFLINE needs to be placed before 
>>>>>>>>>>> CPUHP_TEARDOWN_CPU, and I
>>>>>>>>>>> placed it at the end of the PREPARE section. This crash 
>>>>>>>>>>> hotplug handler is also
>>>>>>>>>>> registered on this state as the callback for the .teardown 
>>>>>>>>>>> method.
>>>>>>>>>>
>>>>>>>>>> TBH, that's still overengineered. Something like this:
>>>>>>>>>>
>>>>>>>>>> bool cpu_is_alive(unsigned int cpu)
>>>>>>>>>> {
>>>>>>>>>>     struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
>>>>>>>>>>
>>>>>>>>>>     return data_race(st->state) <= CPUHP_AP_IDLE_DEAD;
>>>>>>>>>> }
>>>>>>>>>>
>>>>>>>>>> and use this to query the actual state at crash time. That 
>>>>>>>>>> spares all
>>>>>>>>>> those callback heuristics.
>>>>>>>>>>
>>>>>>>>>>> I'm making my way though percpu crash_notes, elfcorehdr, 
>>>>>>>>>>> vmcoreinfo,
>>>>>>>>>>> makedumpfile and (the consumer of it all) the userspace 
>>>>>>>>>>> crash utility,
>>>>>>>>>>> in order to understand the impact of moving from 
>>>>>>>>>>> for_each_present_cpu()
>>>>>>>>>>> to for_each_online_cpu().
>>>>>>>>>>
>>>>>>>>>> Is the packing actually worth the trouble? What's the actual 
>>>>>>>>>> win?
>>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>>
>>>>>>>>>>          tglx
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Thomas,
>>>>>>>>> I've investigated the passing of crash notes through the 
>>>>>>>>> vmcore. What I've learned is that:
>>>>>>>>>
>>>>>>>>> - linux/fs/proc/vmcore.c (which makedumpfile references to do 
>>>>>>>>> its job) does
>>>>>>>>>   not care what the contents of cpu PT_NOTES are, but it does 
>>>>>>>>> coalesce them together.
>>>>>>>>>
>>>>>>>>> - makedumpfile will count the number of cpu PT_NOTES in order 
>>>>>>>>> to determine its
>>>>>>>>>   nr_cpus variable, which is reported in a header, but 
>>>>>>>>> otherwise unused (except
>>>>>>>>>   for sadump method).
>>>>>>>>>
>>>>>>>>> - the crash utility, for the purposes of determining the cpus, 
>>>>>>>>> does not appear to
>>>>>>>>>   reference the elfcorehdr PT_NOTEs. Instead it locates the 
>>>>>>>>> various
>>>>>>>>>   cpu_[possible|present|online]_mask and computes nr_cpus from 
>>>>>>>>> that, and also of
>>>>>>>>>   course which are online. In addition, when crash does 
>>>>>>>>> reference the cpu PT_NOTE,
>>>>>>>>>   to get its prstatus, it does so by using a percpu technique 
>>>>>>>>> directly in the vmcore
>>>>>>>>>   image memory, not via the ELF structure. Said differently, 
>>>>>>>>> it appears to me that
>>>>>>>>>   crash utility doesn't rely on the ELF PT_NOTEs for cpus; 
>>>>>>>>> rather it obtains them
>>>>>>>>>   via kernel cpumasks and the memory within the vmcore.
>>>>>>>>>
>>>>>>>>> With this understanding, I did some testing. Perhaps the most 
>>>>>>>>> telling test was that I
>>>>>>>>> changed the number of cpu PT_NOTEs emitted in the 
>>>>>>>>> crash_prepare_elf64_headers() to just 1,
>>>>>>>>> hot plugged some cpus, then also took a few offline sparsely 
>>>>>>>>> via chcpu, then generated a
>>>>>>>>> vmcore. The crash utility had no problem loading the vmcore, 
>>>>>>>>> it reported the proper number
>>>>>>>>> of cpus and the number offline (despite only one cpu PT_NOTE), 
>>>>>>>>> and changing to a different
>>>>>>>>> cpu via 'set -c 30' and the backtrace was completely valid.
>>>>>>>>>
>>>>>>>>> My take away is that crash utility does not rely upon ELF cpu 
>>>>>>>>> PT_NOTEs, it obtains the
>>>>>>>>> cpu information directly from kernel data structures. Perhaps 
>>>>>>>>> at one time crash relied
>>>>>>>>> upon the ELF information, but no more. (Perhaps there are 
>>>>>>>>> other crash dump analyzers
>>>>>>>>> that might rely on the ELF info?)
>>>>>>>>>
>>>>>>>>> So, all this to say that I see no need to change 
>>>>>>>>> crash_prepare_elf64_headers(). There
>>>>>>>>> is no compelling reason to move away from 
>>>>>>>>> for_each_present_cpu(), or modify the list for
>>>>>>>>> online/offline.
>>>>>>>>>
>>>>>>>>> Which then leaves the topic of the cpuhp state on which to 
>>>>>>>>> register. Perhaps reverting
>>>>>>>>> back to the use of CPUHP_BP_PREPARE_DYN is the right answer. 
>>>>>>>>> There does not appear to
>>>>>>>>> be a compelling need to accurately track whether the cpu went 
>>>>>>>>> online/offline for the
>>>>>>>>> purposes of creating the elfcorehdr, as ultimately the crash 
>>>>>>>>> utility pulls that from
>>>>>>>>> kernel data structures, not the elfcorehdr.
>>>>>>>>>
>>>>>>>>> I think this is what Sourabh has known and has been advocating 
>>>>>>>>> for an optimization
>>>>>>>>> path that allows not regenerating the elfcorehdr on cpu 
>>>>>>>>> changes (because all the percpu
>>>>>>>>> structs are all laid out). I do think it best to leave that as 
>>>>>>>>> an arch choice.
>>>>>>>>
>>>>>>>> Since things are clear on how the PT_NOTES are consumed in 
>>>>>>>> kdump kernel [fs/proc/vmcore.c],
>>>>>>>> makedumpfile, and crash tool I need your opinion on this:
>>>>>>>>
>>>>>>>> Do we really need to regenerate elfcorehdr for CPU hotplug events?
>>>>>>>> If yes, can you please list the elfcorehdr components that 
>>>>>>>> changes due to CPU hotplug.
>>>>>>> Due to the use of for_each_present_cpu(), it is possible for the 
>>>>>>> number of cpu PT_NOTEs
>>>>>>> to fluctuate as cpus are un/plugged. Onlining/offlining of cpus 
>>>>>>> does not impact the
>>>>>>> number of cpu PT_NOTEs (as the cpus are still present).
>>>>>>>
>>>>>>>>
>>>>>>>>  From what I understood, crash notes are prepared for possible 
>>>>>>>> CPUs as system boots and
>>>>>>>> could be used to create a PT_NOTE section for each possible CPU 
>>>>>>>> while generating the elfcorehdr
>>>>>>>> during the kdump kernel load.
>>>>>>>>
>>>>>>>> Now once the elfcorehdr is loaded with PT_NOTEs for every 
>>>>>>>> possible CPU there is no need to
>>>>>>>> regenerate it for CPU hotplug events. Or do we?
>>>>>>>
>>>>>>> For onlining/offlining of cpus, there is no need to regenerate 
>>>>>>> the elfcorehdr. However,
>>>>>>> for actual hot un/plug of cpus, the answer is yes due to 
>>>>>>> for_each_present_cpu(). The
>>>>>>> caveat here of course is that if crash utility is the only 
>>>>>>> coredump analyzer of concern,
>>>>>>> then it doesn't care about these cpu PT_NOTEs and there would be 
>>>>>>> no need to re-generate them.
>>>>>>>
>>>>>>> Also, I'm not sure if ARM cpu hotplug, which is just now coming 
>>>>>>> into mainstream, impacts
>>>>>>> any of this.
>>>>>>>
>>>>>>> Perhaps the one item that might help here is to distinguish 
>>>>>>> between actual hot un/plug of
>>>>>>> cpus, versus onlining/offlining. At the moment, I can not 
>>>>>>> distinguish between a hot plug
>>>>>>> event and an online event (and unplug/offline). If those were 
>>>>>>> distinguishable, then we
>>>>>>> could only regenerate on un/plug events.
>>>>>>>
>>>>>>> Or perhaps moving to for_each_possible_cpu() is the better choice?
>>>>>>
>>>>>> Yes, because once elfcorehdr is built with possible CPUs we don't 
>>>>>> have to worry about
>>>>>> hot[un]plug case.
>>>>>>
>>>>>> Here is my view on how things should be handled if a core-dump 
>>>>>> analyzer is dependent on
>>>>>> elfcorehdr PT_NOTEs to find online/offline CPUs.
>>>>>>
>>>>>> A PT_NOTE in elfcorehdr holds the address of the corresponding 
>>>>>> crash notes (kernel has
>>>>>> one crash note per CPU for every possible CPU). Though the crash 
>>>>>> notes are allocated
>>>>>> during the boot time they are populated when the system is on the 
>>>>>> crash path.
>>>>>>
>>>>>> This is how crash notes are populated on PowerPC and I am 
>>>>>> expecting it would be something
>>>>>> similar on other architectures too.
>>>>>>
>>>>>> The crashing CPU sends IPI to every other online CPU with a 
>>>>>> callback function that updates the
>>>>>> crash notes of that specific CPU. Once the IPI completes the 
>>>>>> crashing CPU updates its own crash
>>>>>> note and proceeds further.
>>>>>>
>>>>>> The crash notes of CPUs remain uninitialized if the CPUs were 
>>>>>> offline or hot unplugged at the time
>>>>>> system crash. The core-dump analyzer should be able to identify 
>>>>>> [un]/initialized crash notes
>>>>>> and display the information accordingly.
>>>>>>
>>>>>> Thoughts?
>>>>>>
>>>>>> - Sourabh
>>>>>
>>>>> I've been examining what it would mean to move to 
>>>>> for_each_possible_cpu() in crash_prepare_elf64_headers(). I think 
>>>>> it means:
>>>>>
>>>>> - Changing for_each_present_cpu() to for_each_possible_cpu() in 
>>>>> crash_prepare_elf64_headers().
>>>>> - For kexec_load() syscall path, rewrite the incoming/supplied 
>>>>> elfcorehdr immediately on the load with the elfcorehdr generated 
>>>>> by crash_prepare_elf64_headers().
>>>>> - Eliminate/remove the cpuhp machinery for handling crash hotplug 
>>>>> events.
>>>>
>>>> If for_each_present_cpu is replaced with for_each_possible_cpu I 
>>>> still need cpuhp machinery
>>>> to update FDT kexec segment for CPU hot add case.
>>>
>>> Ah, ok, that's important! So the cpuhp callbacks are still needed.
>>>>
>>>>
>>>>>
>>>>> This would then setup PT_NOTEs for all possible cpus, which should 
>>>>> in theory accommodate crash analyzers that rely on ELF PT_NOTEs 
>>>>> for crash_notes.
>>>>>
>>>>> If staying with for_each_present_cpu() is ultimately decided, then 
>>>>> I think leaving the cpuhp machinery in place and each arch could 
>>>>> decide how to handle crash cpu hotplug events. The overhead for 
>>>>> doing this is very minimal, and the events are likely very 
>>>>> infrequent.
>>>>
>>>> I agree. Some architectures may need cpuhp machinery to update 
>>>> kexec segment[s] other then elfcorehdr. For example FDT on PowerPC.
>>>>
>>>> - Sourabh Jain
>>>
>>> OK, I was thinking that the desire was to eliminate the cpuhp 
>>> callbacks. In reality, the desire is to change to 
>>> for_each_possible_cpu(). Given that the kernel creates crash_notes 
>>> for all possible cpus upon kernel boot, there seems to be no reason 
>>> to not do this?
>>>
>>> HOWEVER...
>>>
>>> It's not clear to me that this particular change needs to be part of 
>>> this series. It's inclusion would facilitate PPC support, but 
>>> doesn't "solve" anything in general. In fact it causes kexec_load 
>>> and kexec_file_load to deviate (kexec_load via userspace kexec does 
>>> the equivalent of for_each_present_cpu() where as with this change 
>>> kexec_file_load would do for_each_possible_cpu(); until a hot plug 
>>> event then both would do for_each_possible_cpu()). And if this 
>>> change were to arrive as part of Sourabh's PPC support, then it does 
>>> not appear to impact x86 (not sure about other arches). And the 
>>> 'crash' dump analyzer doesn't care either way.
>>>
>>> Including this change would enable an optimization path (for x86 at 
>>> least) that short-circuits cpu hotplug changes in the arch crash 
>>> handler, for example:
>>>
>>> diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
>>> index aca3f1817674..0883f6b11de4 100644
>>> --- a/arch/x86/kernel/crash.c
>>> +++ b/arch/x86/kernel/crash.c
>>> @@ -473,6 +473,11 @@ void arch_crash_handle_hotplug_event(struct 
>>> kimage *image)
>>>     unsigned long mem, memsz;
>>>     unsigned long elfsz = 0;
>>>
>>> +   if (image->file_mode && (
>>> +       image->hp_action == KEXEC_CRASH_HP_ADD_CPU ||
>>> +       image->hp_action == KEXEC_CRASH_HP_REMOVE_CPU))
>>> +       return;
>>> +
>>>     /*
>>>      * Create the new elfcorehdr reflecting the changes to CPU and/or
>>>      * memory resources.
>>>
>>> I'm not sure that is compelling given the infrequent nature of cpu 
>>> hotplug events.
>> It certainly closes/reduces the window where kdump is not active due 
>> kexec segment update.|
>
> Fair enough. I plan to include this change in v19.
>
>>
>>>
>>> In my mind I still have a question about kexec_load() path. The 
>>> userspace kexec can not do the equivalent of 
>>> for_each_possible_cpu(). It can obtain max possible cpus from 
>>> /sys/devices/system/cpu/possible, but for those cpus not present the 
>>> /sys/devices/system/cpu/cpuXX is not available and so the 
>>> crash_notes entries is not available. My attempts to expose all 
>>> cpuXX lead to odd behavior that was requiring changes in ACPI and 
>>> arch code that looked untenable.
>>>
>>> There seem to be these options available for kexec_load() path:
>>> - immediately rewrite the elfcorehdr upon load via a call to 
>>> crash_prepare_elf64_headers(). I've made this work with the 
>>> following, as proof of concept:
>> Yes regenerating/patching the elfcorehdr could be an option for 
>> kexec_load syscall.
> So this is not needed by x86, but more so by ppc. Should this change 
> be in the ppc set or this set?
Since /sys/devices/system/cpu/cpuXX represents possible CPUs on PowerPC,
there is no need for elfcorehdr regeneration on PowerPC in the kexec_load case
for CPU hotplug events.

My ask is: keep the cpuhp machinery so that architectures can update
other kexec segments, if needed, for the CPU add/remove case.
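
Something along these lines (a minimal sketch, assuming the dynamic CPUHP_BP_PREPARE_DYN state
discussed earlier; the function names and the exact crash_handle_hotplug_event() plumbing are
illustrative) is what keeping that machinery amounts to:

static int crash_cpuhp_online(unsigned int cpu)
{
	crash_handle_hotplug_event(KEXEC_CRASH_HP_ADD_CPU, cpu);
	return 0;
}

static int crash_cpuhp_offline(unsigned int cpu)
{
	crash_handle_hotplug_event(KEXEC_CRASH_HP_REMOVE_CPU, cpu);
	return 0;
}

static int __init crash_hotplug_init(void)
{
	int ret;

	/* Dynamic state: startup/teardown callbacks run on cpu add/remove */
	ret = cpuhp_setup_state_nocalls(CPUHP_BP_PREPARE_DYN, "crash/cpuhp",
					crash_cpuhp_online, crash_cpuhp_offline);
	return ret < 0 ? ret : 0;
}
subsys_initcall(crash_hotplug_init);

An arch could then refresh its own kexec segments (e.g. the FDT on PowerPC) from the callbacks.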

In case x86 has nothing to update on CPU hotplug events and you want to
remove the CPU hp machinery, I can add the same
in the ppc patch series.

Thanks,
Sourabh Jain
  
Eric DeVolder March 1, 2023, 2:16 p.m. UTC | #22
On 3/1/23 00:22, Sourabh Jain wrote:
> 
> On 01/03/23 03:20, Eric DeVolder wrote:
>>
>>
>> On 2/27/23 00:11, Sourabh Jain wrote:
>>>
>>> On 25/02/23 01:46, Eric DeVolder wrote:
>>>>
>>>>
>>>> On 2/24/23 02:34, Sourabh Jain wrote:
>>>>>
>>>>> On 24/02/23 02:04, Eric DeVolder wrote:
>>>>>>
>>>>>>
>>>>>> On 2/10/23 00:29, Sourabh Jain wrote:
>>>>>>>
>>>>>>> On 10/02/23 01:09, Eric DeVolder wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> On 2/9/23 12:43, Sourabh Jain wrote:
>>>>>>>>> Hello Eric,
>>>>>>>>>
>>>>>>>>> On 09/02/23 23:01, Eric DeVolder wrote:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 2/8/23 07:44, Thomas Gleixner wrote:
>>>>>>>>>>> Eric!
>>>>>>>>>>>
>>>>>>>>>>> On Tue, Feb 07 2023 at 11:23, Eric DeVolder wrote:
>>>>>>>>>>>> On 2/1/23 05:33, Thomas Gleixner wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> So my latest solution is introduce two new CPUHP states, CPUHP_AP_ELFCOREHDR_ONLINE
>>>>>>>>>>>> for onlining and CPUHP_BP_ELFCOREHDR_OFFLINE for offlining. I'm open to better names.
>>>>>>>>>>>>
>>>>>>>>>>>> The CPUHP_AP_ELFCOREHDR_ONLINE needs to be placed after CPUHP_BRINGUP_CPU. My
>>>>>>>>>>>> attempts at locating this state failed when inside the STARTING section, so I located
>>>>>>>>>>>> this just inside the ONLINE sectoin. The crash hotplug handler is registered on
>>>>>>>>>>>> this state as the callback for the .startup method.
>>>>>>>>>>>>
>>>>>>>>>>>> The CPUHP_BP_ELFCOREHDR_OFFLINE needs to be placed before CPUHP_TEARDOWN_CPU, and I
>>>>>>>>>>>> placed it at the end of the PREPARE section. This crash hotplug handler is also
>>>>>>>>>>>> registered on this state as the callback for the .teardown method.
>>>>>>>>>>>
>>>>>>>>>>> TBH, that's still overengineered. Something like this:
>>>>>>>>>>>
>>>>>>>>>>> bool cpu_is_alive(unsigned int cpu)
>>>>>>>>>>> {
>>>>>>>>>>>     struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
>>>>>>>>>>>
>>>>>>>>>>>     return data_race(st->state) <= CPUHP_AP_IDLE_DEAD;
>>>>>>>>>>> }
>>>>>>>>>>>
>>>>>>>>>>> and use this to query the actual state at crash time. That spares all
>>>>>>>>>>> those callback heuristics.
>>>>>>>>>>>
>>>>>>>>>>>> I'm making my way though percpu crash_notes, elfcorehdr, vmcoreinfo,
>>>>>>>>>>>> makedumpfile and (the consumer of it all) the userspace crash utility,
>>>>>>>>>>>> in order to understand the impact of moving from for_each_present_cpu()
>>>>>>>>>>>> to for_each_online_cpu().
>>>>>>>>>>>
>>>>>>>>>>> Is the packing actually worth the trouble? What's the actual win?
>>>>>>>>>>>
>>>>>>>>>>> Thanks,
>>>>>>>>>>>
>>>>>>>>>>>          tglx
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Thomas,
>>>>>>>>>> I've investigated the passing of crash notes through the vmcore. What I've learned is that:
>>>>>>>>>>
>>>>>>>>>> - linux/fs/proc/vmcore.c (which makedumpfile references to do its job) does
>>>>>>>>>>   not care what the contents of cpu PT_NOTES are, but it does coalesce them together.
>>>>>>>>>>
>>>>>>>>>> - makedumpfile will count the number of cpu PT_NOTES in order to determine its
>>>>>>>>>>   nr_cpus variable, which is reported in a header, but otherwise unused (except
>>>>>>>>>>   for sadump method).
>>>>>>>>>>
>>>>>>>>>> - the crash utility, for the purposes of determining the cpus, does not appear to
>>>>>>>>>>   reference the elfcorehdr PT_NOTEs. Instead it locates the various
>>>>>>>>>>   cpu_[possible|present|online]_mask and computes nr_cpus from that, and also of
>>>>>>>>>>   course which are online. In addition, when crash does reference the cpu PT_NOTE,
>>>>>>>>>>   to get its prstatus, it does so by using a percpu technique directly in the vmcore
>>>>>>>>>>   image memory, not via the ELF structure. Said differently, it appears to me that
>>>>>>>>>>   crash utility doesn't rely on the ELF PT_NOTEs for cpus; rather it obtains them
>>>>>>>>>>   via kernel cpumasks and the memory within the vmcore.
>>>>>>>>>>
>>>>>>>>>> With this understanding, I did some testing. Perhaps the most telling test was that I
>>>>>>>>>> changed the number of cpu PT_NOTEs emitted in the crash_prepare_elf64_headers() to just 1,
>>>>>>>>>> hot plugged some cpus, then also took a few offline sparsely via chcpu, then generated a
>>>>>>>>>> vmcore. The crash utility had no problem loading the vmcore, it reported the proper number
>>>>>>>>>> of cpus and the number offline (despite only one cpu PT_NOTE), and changing to a different
>>>>>>>>>> cpu via 'set -c 30' and the backtrace was completely valid.
>>>>>>>>>>
>>>>>>>>>> My take away is that crash utility does not rely upon ELF cpu PT_NOTEs, it obtains the
>>>>>>>>>> cpu information directly from kernel data structures. Perhaps at one time crash relied
>>>>>>>>>> upon the ELF information, but no more. (Perhaps there are other crash dump analyzers
>>>>>>>>>> that might rely on the ELF info?)
>>>>>>>>>>
>>>>>>>>>> So, all this to say that I see no need to change crash_prepare_elf64_headers(). There
>>>>>>>>>> is no compelling reason to move away from for_each_present_cpu(), or modify the list for
>>>>>>>>>> online/offline.
>>>>>>>>>>
>>>>>>>>>> Which then leaves the topic of the cpuhp state on which to register. Perhaps reverting
>>>>>>>>>> back to the use of CPUHP_BP_PREPARE_DYN is the right answer. There does not appear to
>>>>>>>>>> be a compelling need to accurately track whether the cpu went online/offline for the
>>>>>>>>>> purposes of creating the elfcorehdr, as ultimately the crash utility pulls that from
>>>>>>>>>> kernel data structures, not the elfcorehdr.
>>>>>>>>>>
>>>>>>>>>> I think this is what Sourabh has known and has been advocating for an optimization
>>>>>>>>>> path that allows not regenerating the elfcorehdr on cpu changes (because all the percpu
>>>>>>>>>> structs are all laid out). I do think it best to leave that as an arch choice.
>>>>>>>>>
>>>>>>>>> Since things are clear on how the PT_NOTES are consumed in kdump kernel [fs/proc/vmcore.c],
>>>>>>>>> makedumpfile, and crash tool I need your opinion on this:
>>>>>>>>>
>>>>>>>>> Do we really need to regenerate elfcorehdr for CPU hotplug events?
>>>>>>>>> If yes, can you please list the elfcorehdr components that changes due to CPU hotplug.
>>>>>>>> Due to the use of for_each_present_cpu(), it is possible for the number of cpu PT_NOTEs
>>>>>>>> to fluctuate as cpus are un/plugged. Onlining/offlining of cpus does not impact the
>>>>>>>> number of cpu PT_NOTEs (as the cpus are still present).
>>>>>>>>
>>>>>>>>>
>>>>>>>>>  From what I understood, crash notes are prepared for possible CPUs as system boots and
>>>>>>>>> could be used to create a PT_NOTE section for each possible CPU while generating the 
>>>>>>>>> elfcorehdr
>>>>>>>>> during the kdump kernel load.
>>>>>>>>>
>>>>>>>>> Now once the elfcorehdr is loaded with PT_NOTEs for every possible CPU there is no need to
>>>>>>>>> regenerate it for CPU hotplug events. Or do we?
>>>>>>>>
>>>>>>>> For onlining/offlining of cpus, there is no need to regenerate the elfcorehdr. However,
>>>>>>>> for actual hot un/plug of cpus, the answer is yes due to for_each_present_cpu(). The
>>>>>>>> caveat here of course is that if crash utility is the only coredump analyzer of concern,
>>>>>>>> then it doesn't care about these cpu PT_NOTEs and there would be no need to re-generate them.
>>>>>>>>
>>>>>>>> Also, I'm not sure if ARM cpu hotplug, which is just now coming into mainstream, impacts
>>>>>>>> any of this.
>>>>>>>>
>>>>>>>> Perhaps the one item that might help here is to distinguish between actual hot un/plug of
>>>>>>>> cpus, versus onlining/offlining. At the moment, I can not distinguish between a hot plug
>>>>>>>> event and an online event (and unplug/offline). If those were distinguishable, then we
>>>>>>>> could only regenerate on un/plug events.
>>>>>>>>
>>>>>>>> Or perhaps moving to for_each_possible_cpu() is the better choice?
>>>>>>>
>>>>>>> Yes, because once elfcorehdr is built with possible CPUs we don't have to worry about
>>>>>>> hot[un]plug case.
>>>>>>>
>>>>>>> Here is my view on how things should be handled if a core-dump analyzer is dependent on
>>>>>>> elfcorehdr PT_NOTEs to find online/offline CPUs.
>>>>>>>
>>>>>>> A PT_NOTE in elfcorehdr holds the address of the corresponding crash notes (kernel has
>>>>>>> one crash note per CPU for every possible CPU). Though the crash notes are allocated
>>>>>>> during the boot time they are populated when the system is on the crash path.
>>>>>>>
>>>>>>> This is how crash notes are populated on PowerPC and I am expecting it would be something
>>>>>>> similar on other architectures too.
>>>>>>>
>>>>>>> The crashing CPU sends IPI to every other online CPU with a callback function that updates the
>>>>>>> crash notes of that specific CPU. Once the IPI completes the crashing CPU updates its own crash
>>>>>>> note and proceeds further.
>>>>>>>
>>>>>>> The crash notes of CPUs remain uninitialized if the CPUs were offline or hot unplugged at the 
>>>>>>> time
>>>>>>> system crash. The core-dump analyzer should be able to identify [un]/initialized crash notes
>>>>>>> and display the information accordingly.
>>>>>>>
>>>>>>> Thoughts?
>>>>>>>
>>>>>>> - Sourabh
>>>>>>
>>>>>> I've been examining what it would mean to move to for_each_possible_cpu() in 
>>>>>> crash_prepare_elf64_headers(). I think it means:
>>>>>>
>>>>>> - Changing for_each_present_cpu() to for_each_possible_cpu() in crash_prepare_elf64_headers().
>>>>>> - For kexec_load() syscall path, rewrite the incoming/supplied elfcorehdr immediately on the 
>>>>>> load with the elfcorehdr generated by crash_prepare_elf64_headers().
>>>>>> - Eliminate/remove the cpuhp machinery for handling crash hotplug events.
>>>>>
>>>>> If for_each_present_cpu is replaced with for_each_possible_cpu I still need cpuhp machinery
>>>>> to update FDT kexec segment for CPU hot add case.
>>>>
>>>> Ah, ok, that's important! So the cpuhp callbacks are still needed.
>>>>>
>>>>>
>>>>>>
>>>>>> This would then setup PT_NOTEs for all possible cpus, which should in theory accommodate crash 
>>>>>> analyzers that rely on ELF PT_NOTEs for crash_notes.
>>>>>>
>>>>>> If staying with for_each_present_cpu() is ultimately decided, then I think leaving the cpuhp 
>>>>>> machinery in place and each arch could decide how to handle crash cpu hotplug events. The 
>>>>>> overhead for doing this is very minimal, and the events are likely very infrequent.
>>>>>
>>>>> I agree. Some architectures may need cpuhp machinery to update kexec segment[s] other then 
>>>>> elfcorehdr. For example FDT on PowerPC.
>>>>>
>>>>> - Sourabh Jain
>>>>
>>>> OK, I was thinking that the desire was to eliminate the cpuhp callbacks. In reality, the desire 
>>>> is to change to for_each_possible_cpu(). Given that the kernel creates crash_notes for all 
>>>> possible cpus upon kernel boot, there seems to be no reason to not do this?
>>>>
>>>> HOWEVER...
>>>>
>>>> It's not clear to me that this particular change needs to be part of this series. It's inclusion 
>>>> would facilitate PPC support, but doesn't "solve" anything in general. In fact it causes 
>>>> kexec_load and kexec_file_load to deviate (kexec_load via userspace kexec does the equivalent of 
>>>> for_each_present_cpu() where as with this change kexec_file_load would do 
>>>> for_each_possible_cpu(); until a hot plug event then both would do for_each_possible_cpu()). And 
>>>> if this change were to arrive as part of Sourabh's PPC support, then it does not appear to 
>>>> impact x86 (not sure about other arches). And the 'crash' dump analyzer doesn't care either way.
>>>>
>>>> Including this change would enable an optimization path (for x86 at least) that short-circuits 
>>>> cpu hotplug changes in the arch crash handler, for example:
>>>>
>>>> diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
>>>> index aca3f1817674..0883f6b11de4 100644
>>>> --- a/arch/x86/kernel/crash.c
>>>> +++ b/arch/x86/kernel/crash.c
>>>> @@ -473,6 +473,11 @@ void arch_crash_handle_hotplug_event(struct kimage *image)
>>>>     unsigned long mem, memsz;
>>>>     unsigned long elfsz = 0;
>>>>
>>>> +   if (image->file_mode && (
>>>> +       image->hp_action == KEXEC_CRASH_HP_ADD_CPU ||
>>>> +       image->hp_action == KEXEC_CRASH_HP_REMOVE_CPU))
>>>> +       return;
>>>> +
>>>>     /*
>>>>      * Create the new elfcorehdr reflecting the changes to CPU and/or
>>>>      * memory resources.
>>>>
>>>> I'm not sure that is compelling given the infrequent nature of cpu hotplug events.
>>> It certainly closes/reduces the window where kdump is not active due kexec segment update.|
>>
>> Fair enough. I plan to include this change in v19.
>>
>>>
>>>>
>>>> In my mind I still have a question about kexec_load() path. The userspace kexec can not do the 
>>>> equivalent of for_each_possible_cpu(). It can obtain max possible cpus from 
>>>> /sys/devices/system/cpu/possible, but for those cpus not present the 
>>>> /sys/devices/system/cpu/cpuXX is not available and so the crash_notes entries is not available. 
>>>> My attempts to expose all cpuXX lead to odd behavior that was requiring changes in ACPI and arch 
>>>> code that looked untenable.
>>>>
>>>> There seem to be these options available for kexec_load() path:
>>>> - immediately rewrite the elfcorehdr upon load via a call to crash_prepare_elf64_headers(). I've 
>>>> made this work with the following, as proof of concept:
>>> Yes regenerating/patching the elfcorehdr could be an option for kexec_load syscall.
>> So this is not needed by x86, but more so by ppc. Should this change be in the ppc set or this set?
> Since /sys/devices/system/cpu/cpuXX represents possible CPUs on PowerPC, there is no need for
> elfcorehdr regeneration on PowerPC in the kexec_load case
> for CPU hotplug events.
> 
> My ask is: keep the cpuhp machinery so that architectures can update other kexec segments, if
> needed, for the CPU add/remove case.
> 
> In case x86 has nothing to update on CPU hotplug events and you want to remove the CPU hp machinery,
> I can add the same
> in the ppc patch series.

I'll keep the cpuhp machinery; it is needed in particular for kexec_load usage since we
are changing crash_prepare_elf64_headers() to for_each_possible_cpu().
eric

> 
> Thanks,
> Sourabh Jain
  
Eric DeVolder March 1, 2023, 3:48 p.m. UTC | #23
On 2/28/23 12:52, Eric DeVolder wrote:
> 
> 
> On 2/28/23 06:44, Baoquan He wrote:
>> On 02/13/23 at 10:10am, Sourabh Jain wrote:
>>>
>>> On 11/02/23 06:05, Eric DeVolder wrote:
>>>>
>>>>
>>>> On 2/10/23 00:29, Sourabh Jain wrote:
>>>>>
>>>>> On 10/02/23 01:09, Eric DeVolder wrote:
>>>>>>
>>>>>>
>>>>>> On 2/9/23 12:43, Sourabh Jain wrote:
>>>>>>> Hello Eric,
>>>>>>>
>>>>>>> On 09/02/23 23:01, Eric DeVolder wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> On 2/8/23 07:44, Thomas Gleixner wrote:
>>>>>>>>> Eric!
>>>>>>>>>
>>>>>>>>> On Tue, Feb 07 2023 at 11:23, Eric DeVolder wrote:
>>>>>>>>>> On 2/1/23 05:33, Thomas Gleixner wrote:
>>>>>>>>>>
>>>>>>>>>> So my latest solution is introduce two new CPUHP
>>>>>>>>>> states, CPUHP_AP_ELFCOREHDR_ONLINE
>>>>>>>>>> for onlining and CPUHP_BP_ELFCOREHDR_OFFLINE for
>>>>>>>>>> offlining. I'm open to better names.
>>>>>>>>>>
>>>>>>>>>> The CPUHP_AP_ELFCOREHDR_ONLINE needs to be
>>>>>>>>>> placed after CPUHP_BRINGUP_CPU. My
>>>>>>>>>> attempts at locating this state failed when
>>>>>>>>>> inside the STARTING section, so I located
>>>>>>>>>> this just inside the ONLINE section. The crash
>>>>>>>>>> hotplug handler is registered on
>>>>>>>>>> this state as the callback for the .startup method.
>>>>>>>>>>
>>>>>>>>>> The CPUHP_BP_ELFCOREHDR_OFFLINE needs to be
>>>>>>>>>> placed before CPUHP_TEARDOWN_CPU, and I
>>>>>>>>>> placed it at the end of the PREPARE section.
>>>>>>>>>> This crash hotplug handler is also
>>>>>>>>>> registered on this state as the callback for the .teardown method.
>>>>>>>>>
>>>>>>>>> TBH, that's still overengineered. Something like this:
>>>>>>>>>
>>>>>>>>> bool cpu_is_alive(unsigned int cpu)
>>>>>>>>> {
>>>>>>>>>      struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
>>>>>>>>>
>>>>>>>>>      return data_race(st->state) <= CPUHP_AP_IDLE_DEAD;
>>>>>>>>> }
>>>>>>>>>
>>>>>>>>> and use this to query the actual state at crash
>>>>>>>>> time. That spares all
>>>>>>>>> those callback heuristics.
>>>>>>>>>
>>>>>>>>>> I'm making my way though percpu crash_notes,
>>>>>>>>>> elfcorehdr, vmcoreinfo,
>>>>>>>>>> makedumpfile and (the consumer of it all) the
>>>>>>>>>> userspace crash utility,
>>>>>>>>>> in order to understand the impact of moving from
>>>>>>>>>> for_each_present_cpu()
>>>>>>>>>> to for_each_online_cpu().
>>>>>>>>>
>>>>>>>>> Is the packing actually worth the trouble? What's the actual win?
>>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>>
>>>>>>>>>           tglx
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>> Thomas,
>>>>>>>> I've investigated the passing of crash notes through the
>>>>>>>> vmcore. What I've learned is that:
>>>>>>>>
>>>>>>>> - linux/fs/proc/vmcore.c (which makedumpfile references
>>>>>>>> to do its job) does
>>>>>>>>    not care what the contents of cpu PT_NOTES are, but it
>>>>>>>> does coalesce them together.
>>>>>>>>
>>>>>>>> - makedumpfile will count the number of cpu PT_NOTES in
>>>>>>>> order to determine its
>>>>>>>>    nr_cpus variable, which is reported in a header, but
>>>>>>>> otherwise unused (except
>>>>>>>>    for sadump method).
>>>>>>>>
>>>>>>>> - the crash utility, for the purposes of determining the
>>>>>>>> cpus, does not appear to
>>>>>>>>    reference the elfcorehdr PT_NOTEs. Instead it locates the various
>>>>>>>>    cpu_[possible|present|online]_mask and computes
>>>>>>>> nr_cpus from that, and also of
>>>>>>>>    course which are online. In addition, when crash does
>>>>>>>> reference the cpu PT_NOTE,
>>>>>>>>    to get its prstatus, it does so by using a percpu
>>>>>>>> technique directly in the vmcore
>>>>>>>>    image memory, not via the ELF structure. Said
>>>>>>>> differently, it appears to me that
>>>>>>>>    crash utility doesn't rely on the ELF PT_NOTEs for
>>>>>>>> cpus; rather it obtains them
>>>>>>>>    via kernel cpumasks and the memory within the vmcore.
>>>>>>>>
>>>>>>>> With this understanding, I did some testing. Perhaps the
>>>>>>>> most telling test was that I
>>>>>>>> changed the number of cpu PT_NOTEs emitted in the
>>>>>>>> crash_prepare_elf64_headers() to just 1,
>>>>>>>> hot plugged some cpus, then also took a few offline
>>>>>>>> sparsely via chcpu, then generated a
>>>>>>>> vmcore. The crash utility had no problem loading the
>>>>>>>> vmcore, it reported the proper number
>>>>>>>> of cpus and the number offline (despite only one cpu
>>>>>>>> PT_NOTE), and changing to a different
>>>>>>>> cpu via 'set -c 30' and the backtrace was completely valid.
>>>>>>>>
>>>>>>>> My take away is that crash utility does not rely upon
>>>>>>>> ELF cpu PT_NOTEs, it obtains the
>>>>>>>> cpu information directly from kernel data structures.
>>>>>>>> Perhaps at one time crash relied
>>>>>>>> upon the ELF information, but no more. (Perhaps there
>>>>>>>> are other crash dump analyzers
>>>>>>>> that might rely on the ELF info?)
>>>>>>>>
>>>>>>>> So, all this to say that I see no need to change
>>>>>>>> crash_prepare_elf64_headers(). There
>>>>>>>> is no compelling reason to move away from
>>>>>>>> for_each_present_cpu(), or modify the list for
>>>>>>>> online/offline.
>>>>>>>>
>>>>>>>> Which then leaves the topic of the cpuhp state on which
>>>>>>>> to register. Perhaps reverting
>>>>>>>> back to the use of CPUHP_BP_PREPARE_DYN is the right
>>>>>>>> answer. There does not appear to
>>>>>>>> be a compelling need to accurately track whether the cpu
>>>>>>>> went online/offline for the
>>>>>>>> purposes of creating the elfcorehdr, as ultimately the
>>>>>>>> crash utility pulls that from
>>>>>>>> kernel data structures, not the elfcorehdr.
>>>>>>>>
>>>>>>>> I think this is what Sourabh has known and has been
>>>>>>>> advocating for an optimization
>>>>>>>> path that allows not regenerating the elfcorehdr on cpu
>>>>>>>> changes (because all the percpu
>>>>>>>> structs are all laid out). I do think it best to leave
>>>>>>>> that as an arch choice.
>>>>>>>
>>>>>>> Since things are clear on how the PT_NOTES are consumed in
>>>>>>> kdump kernel [fs/proc/vmcore.c],
>>>>>>> makedumpfile, and crash tool I need your opinion on this:
>>>>>>>
>>>>>>> Do we really need to regenerate elfcorehdr for CPU hotplug events?
>>>>>>> If yes, can you please list the elfcorehdr components that
>>>>>>> changes due to CPU hotplug.
>>>>>> Due to the use of for_each_present_cpu(), it is possible for the
>>>>>> number of cpu PT_NOTEs
>>>>>> to fluctuate as cpus are un/plugged. Onlining/offlining of cpus
>>>>>> does not impact the
>>>>>> number of cpu PT_NOTEs (as the cpus are still present).
>>>>>>
>>>>>>>
>>>>>>>   From what I understood, crash notes are prepared for
>>>>>>> possible CPUs as system boots and
>>>>>>> could be used to create a PT_NOTE section for each possible
>>>>>>> CPU while generating the elfcorehdr
>>>>>>> during the kdump kernel load.
>>>>>>>
>>>>>>> Now once the elfcorehdr is loaded with PT_NOTEs for every
>>>>>>> possible CPU there is no need to
>>>>>>> regenerate it for CPU hotplug events. Or do we?
>>>>>>
>>>>>> For onlining/offlining of cpus, there is no need to regenerate
>>>>>> the elfcorehdr. However,
>>>>>> for actual hot un/plug of cpus, the answer is yes due to
>>>>>> for_each_present_cpu(). The
>>>>>> caveat here of course is that if crash utility is the only
>>>>>> coredump analyzer of concern,
>>>>>> then it doesn't care about these cpu PT_NOTEs and there would be
>>>>>> no need to re-generate them.
>>>>>>
>>>>>> Also, I'm not sure if ARM cpu hotplug, which is just now coming
>>>>>> into mainstream, impacts
>>>>>> any of this.
>>>>>>
>>>>>> Perhaps the one item that might help here is to distinguish
>>>>>> between actual hot un/plug of
>>>>>> cpus, versus onlining/offlining. At the moment, I can not
>>>>>> distinguish between a hot plug
>>>>>> event and an online event (and unplug/offline). If those were
>>>>>> distinguishable, then we
>>>>>> could only regenerate on un/plug events.
>>>>>>
>>>>>> Or perhaps moving to for_each_possible_cpu() is the better choice?
>>>>>
>>>>> Yes, because once elfcorehdr is built with possible CPUs we don't
>>>>> have to worry about
>>>>> hot[un]plug case.
>>>>>
>>>>> Here is my view on how things should be handled if a core-dump
>>>>> analyzer is dependent on
>>>>> elfcorehdr PT_NOTEs to find online/offline CPUs.
>>>>>
>>>>> A PT_NOTE in elfcorehdr holds the address of the corresponding crash
>>>>> notes (kernel has
>>>>> one crash note per CPU for every possible CPU). Though the crash
>>>>> notes are allocated
>>>>> during the boot time they are populated when the system is on the
>>>>> crash path.
>>>>>
>>>>> This is how crash notes are populated on PowerPC and I am expecting
>>>>> it would be something
>>>>> similar on other architectures too.
>>>>>
>>>>> The crashing CPU sends IPI to every other online CPU with a callback
>>>>> function that updates the
>>>>> crash notes of that specific CPU. Once the IPI completes the
>>>>> crashing CPU updates its own crash
>>>>> note and proceeds further.
>>>>>
>>>>> The crash notes of CPUs remain uninitialized if the CPUs were
>>>>> offline or hot unplugged at the time
>>>>> system crash. The core-dump analyzer should be able to identify
>>>>> [un]/initialized crash notes
>>>>> and display the information accordingly.
>>>>>
>>>>> Thoughts?
>>>>>
>>>>> - Sourabh
>>>>
>>>> In general, I agree with your points. You've presented a strong case to
>>>> go with for_each_possible_cpu() in crash_prepare_elf64_headers() and
>>>> those crash notes would always be present, and we can ignore changes to
>>>> cpus wrt/ elfcorehdr updates.
>>>>
>>>> But what do we do about kexec_load() syscall? The way the userspace
>>>> utility works is it determines cpus by:
>>>>   nr_cpus = sysconf(_SC_NPROCESSORS_CONF);
>>>> which is not the equivalent of possible_cpus. So the complete list of
>>>> cpu PT_NOTEs is not generated up front. We would need a solution for
>>>> that?
>>> Hello Eric,
>>>
>>> The sysconf document says _SC_NPROCESSORS_CONF is processors configured,
>>> isn't that equivalent to possible CPUs?
>>>
>>> What exactly sysconf(_SC_NPROCESSORS_CONF) returns on x86? IIUC, on powerPC
>>> it is possible CPUs.
>>
> Baoquan,
> 
>>  From sysconf man page, with my understanding, _SC_NPROCESSORS_CONF is
>> returning the possible cpus, while _SC_NPROCESSORS_ONLN returns present
>> cpus. If these are true, we can use them.
> 
> Thomas Gleixner has pointed out that:
> 
>   glibc tries to evaluate that in the following order:
>    1) /sys/devices/system/cpu/cpu*
>       That's present CPUs not possible CPUs
>    2) /proc/stat
>       That's online CPUs
>    3) sched_getaffinity()
>       That's online CPUs at best. In the worst case it's an affinity mask
>       which is set on a process group
> 
> meaning that _SC_NPROCESSORS_CONF is not equivalent to possible_cpus(). Furthermore, the 
> /sys/devices/system/cpu/cpuXX entries are not available for not-present-but-possible cpus; thus 
> the userspace kexec utility cannot write out the elfcorehdr with all possible cpus listed.
> 
>>
>> But I am wondering why the existing present cpu way is going to be
>> discarded. Sorry, I tried to go through this thread, it's too long, can
>> anyone summarize the reason with shorter and clear sentences. Sorry
>> again for that.
> 
> By utilizing for_each_possible_cpu() in crash_prepare_elf64_headers(), in the case of the 
> kexec_file_load(), this change would simplify some issues Sourabh has encountered for PPC support. 
> It would also enable an optimization that permits NOT re-generating the elfcorehdr on cpu changes, 
> as all the [possible] cpus are already described in the elfcorehdr.
> 
> I've pointed out that this change would have kexec_load (as kexec-tools can initially only write 
> out the present_cpus()) deviate from kexec_file_load (which would now write out the 
> possible_cpus()). This deviation would disappear after the first hotplug event (due to calling 
> crash_prepare_elf64_headers()). Or I've provided a simple way for kexec_load to rewrite its 
> elfcorehdr upon initial load (by calling into the crash hotplug handler).
> 
> Can you think of any side effects of going to for_each_possible_cpu()?
> 
> Thanks,
> eric

Well, this won't be shorter sentences, but hopefully it makes the case clearer. Below I've 
cut-n-pasted my current patch w/ commit message which explains it all.

Please let me know if you can think of any side effects not addressed!
Thanks,
eric

> 
> 
>>
>>>
>>> In case sysconf(_SC_NPROCESSORS_CONF) is not consistent then we can go with:
>>> /sys/devices/system/cpu/possible for kexec_load case.
>>>
>>> Thoughts?
>>>
>>> - Sourabh Jain
>>>
>>

 From b56aa428b07d970f26e3c3704d54ce8805f05ddc Mon Sep 17 00:00:00 2001
From: Eric DeVolder <eric.devolder@oracle.com>
Date: Tue, 28 Feb 2023 14:20:04 -0500
Subject: [PATCH v19 3/7] crash: change crash_prepare_elf64_headers() to
  for_each_possible_cpu()

The function crash_prepare_elf64_headers() generates the elfcorehdr
which describes the cpus and memory in the system for the crash kernel.
In particular, it writes out ELF PT_NOTEs for memory regions and the
processors in the system.

With respect to the cpus, the current implementation utilizes
for_each_present_cpu() which means that as cpus are added and removed,
the elfcorehdr must again be updated to reflect the new set of cpus.

The reasoning behind the change to use for_each_possible_cpu(), is:

- At kernel boot time, all percpu crash_notes are allocated for all
   possible cpus; that is, crash_notes are not allocated dynamically
   when cpus are plugged/unplugged. Thus the crash_notes for each
   possible cpu are always available.

- The crash_prepare_elf64_headers() creates an ELF PT_NOTE per cpu.
   Changing to for_each_possible_cpu() is valid as the crash_notes
   pointed to by each cpu PT_NOTE are present and always valid.

Furthermore, examining a common crash processing path of:

  kernel panic -> crash kernel -> makedumpfile -> 'crash' analyzer
            elfcorehdr      /proc/vmcore     vmcore

reveals how the ELF cpu PT_NOTEs are utilized:

- Upon panic, each cpu is sent an IPI and shuts itself down, recording
  its state in its crash_notes. When all cpus are shutdown, the
  crash kernel is launched with a pointer to the elfcorehdr.

- The crash kernel via linux/fs/proc/vmcore.c does not examine or
  use the contents of the PT_NOTEs, it exposes them via /proc/vmcore.

- The makedumpfile utility uses /proc/vmcore and reads the cpu
  PT_NOTEs to craft a nr_cpus variable, which is reported in a
  header but otherwise generally unused. Makedumpfile creates the
  vmcore.

- The 'crash' dump analyzer does not appear to reference the cpu
  PT_NOTEs. Instead it looks up the cpu_[possible|present|online]_mask
  symbols and directly examines those structure contents from vmcore
  memory. From that information it is able to determine which cpus
  are present and online, and locate the corresponding crash_notes.
  Said differently, it appears to me that 'crash' analyzer does not
  rely on the ELF PT_NOTEs for cpus; rather it obtains the information
  directly via kernel symbols and the memory within the vmcore.

(There may be other vmcore-generating and analysis tools that do use
these PT_NOTEs, but 'makedumpfile' and 'crash' seem to me to be the
most common solution.)

This change results in the benefit of having all cpus described in
the elfcorehdr, and therefore reducing the need to re-generate the
elfcorehdr on cpu changes, at the small expense of an additional
56 bytes per PT_NOTE for not-present-but-possible cpus.

On systems where kexec_file_load() syscall is utilized, all the above
is valid. On systems where kexec_load() syscall is utilized, there
may be the need for the elfcorehdr to be regenerated once. The reason
being that some archs only populate the 'present' cpus in the
/sys/devices/system/cpu entries, which the userspace 'kexec' utility
uses to generate the userspace-supplied elfcorehdr. In this situation,
one memory or cpu change will rewrite the elfcorehdr via the
crash_prepare_elf64_headers() function and now all possible cpus will
be described, just as with kexec_file_load() syscall.

Suggested-by: Sourabh Jain <sourabhjain@linux.ibm.com>
Signed-off-by: Eric DeVolder <eric.devolder@oracle.com>
---
  kernel/crash_core.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/crash_core.c b/kernel/crash_core.c
index dba4b75f7541..537b199a8774 100644
--- a/kernel/crash_core.c
+++ b/kernel/crash_core.c
@@ -365,7 +365,7 @@ int crash_prepare_elf64_headers(struct crash_mem *mem, int need_kernel_map,
  	ehdr->e_phentsize = sizeof(Elf64_Phdr);

  	/* Prepare one phdr of type PT_NOTE for each present CPU */
-	for_each_present_cpu(cpu) {
+	for_each_possible_cpu(cpu) {
  		phdr->p_type = PT_NOTE;
  		notes_addr = per_cpu_ptr_to_phys(per_cpu_ptr(crash_notes, cpu));
  		phdr->p_offset = phdr->p_paddr = notes_addr;
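
For the kexec_load scenario described at the end of the commit message, the "rewrite the
elfcorehdr once on initial load" idea could be sketched roughly as below. This is an illustration
only, not the proof-of-concept mentioned earlier in the thread: the hook point, the helper name,
and the KEXEC_CRASH_HP_NONE action value are assumptions.

/*
 * Hypothetical sketch: after a kexec_load() of the crash kernel has
 * accepted the userspace-supplied elfcorehdr, call the crash hotplug
 * handler once so that crash_prepare_elf64_headers() regenerates the
 * elfcorehdr with all possible CPUs described.  The hp_action value
 * and the place this would be called from are illustrative only.
 */
static void crash_elfcorehdr_refresh(struct kimage *image)
{
	if (!image->file_mode && image->type == KEXEC_TYPE_CRASH) {
		/* Reuse the hotplug update path to regenerate the elfcorehdr. */
		handle_hotplug_event(KEXEC_CRASH_HP_NONE, KEXEC_CRASH_HP_INVALID_CPU);
	}
}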
  
Sourabh Jain March 2, 2023, 5:23 a.m. UTC | #24
On 01/03/23 00:22, Eric DeVolder wrote:
>
>
> On 2/28/23 06:44, Baoquan He wrote:
>> [... deeply nested quote of the earlier exchange, identical to the quoting in message #23 above, trimmed ...]
> Baoquan,
>
>>  From sysconf man page, with my understanding, _SC_NPROCESSORS_CONF is
>> returning the possible cpus, while _SC_NPROCESSORS_ONLN returns present
>> cpus. If these are true, we can use them.
>
> Thomas Gleixner has pointed out that:
>
>  glibc tries to evaluate that in the following order:
>   1) /sys/devices/system/cpu/cpu*
>      That's present CPUs not possible CPUs
>   2) /proc/stat
>      That's online CPUs
>   3) sched_getaffinity()
>      That's online CPUs at best. In the worst case it's an affinity mask
>      which is set on a process group
>
> meaning that _SC_NPROCESSORS_CONF is not equivalent to 
> possible_cpus(). Furthermore, the /sys/devices/system/cpu/cpuXX 
> entries are not available for not-present-but-possible cpus; thus 
> the userspace kexec utility cannot write out the elfcorehdr with all 
> possible cpus listed.
>
>>
>> But I am wondering why the existing present cpu way is going to be
>> discarded. Sorry, I tried to go through this thread, it's too long, can
>> anyone summarize the reason with shorter and clear sentences. Sorry
>> again for that.
>
Hello Eric,

> By utilizing for_each_possible_cpu() in crash_prepare_elf64_headers(), 
> in the case of the kexec_file_load(), this change would simplify some 
> issues Sourabh has encountered for PPC support.

Things are fine even with for_each_present_cpu on PPC. It is just that I 
want to avoid
the regeneration of elfcorehdr for every CPU change by packing possible 
CPUs at once.


Thanks,
Sourabh Jain
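
Earlier in the thread Sourabh points out that the crash notes of CPUs that were offline or hot
unplugged at crash time remain uninitialized, and that a core-dump analyzer should be able to tell
the two cases apart. A hedged sketch of such a check follows; it is not taken from makedumpfile or
the crash utility, and it assumes an unused per-CPU note buffer stays zero-filled, as the thread
suggests.

#include <elf.h>
#include <stdbool.h>
#include <string.h>

/*
 * Illustrative analyzer-side check: a populated per-CPU crash note
 * starts with a valid ELF note header (non-zero name and descriptor
 * sizes for the NT_PRSTATUS entry); an untouched buffer is all zeroes.
 */
static bool crash_note_is_populated(const void *buf, size_t len)
{
	Elf64_Nhdr nhdr;

	if (len < sizeof(nhdr))
		return false;
	memcpy(&nhdr, buf, sizeof(nhdr));
	return nhdr.n_namesz != 0 && nhdr.n_descsz != 0;
}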
  
Baoquan He March 2, 2023, 10:51 a.m. UTC | #25
On 03/01/23 at 09:48am, Eric DeVolder wrote:
...... 
> From b56aa428b07d970f26e3c3704d54ce8805f05ddc Mon Sep 17 00:00:00 2001
> From: Eric DeVolder <eric.devolder@oracle.com>
> Date: Tue, 28 Feb 2023 14:20:04 -0500
> Subject: [PATCH v19 3/7] crash: change crash_prepare_elf64_headers() to
>  for_each_possible_cpu()
> 
> The function crash_prepare_elf64_headers() generates the elfcorehdr
> which describes the cpus and memory in the system for the crash kernel.
> In particular, it writes out ELF PT_NOTEs for memory regions and the
> processors in the system.
> 
> With respect to the cpus, the current implementation utilizes
> for_each_present_cpu() which means that as cpus are added and removed,
> the elfcorehdr must again be updated to reflect the new set of cpus.
> 
> The reasoning behind the change to use for_each_possible_cpu(), is:
> 
> - At kernel boot time, all percpu crash_notes are allocated for all
>   possible cpus; that is, crash_notes are not allocated dynamically
>   when cpus are plugged/unplugged. Thus the crash_notes for each
>   possible cpu are always available.
> 
> - The crash_prepare_elf64_headers() creates an ELF PT_NOTE per cpu.
>   Changing to for_each_possible_cpu() is valid as the crash_notes
>   pointed to by each cpu PT_NOTE are present and always valid.
> 
> Furthermore, examining a common crash processing path of:
> 
>  kernel panic -> crash kernel -> makedumpfile -> 'crash' analyzer
>            elfcorehdr      /proc/vmcore     vmcore
> 
> reveals how the ELF cpu PT_NOTEs are utilized:
> 
> - Upon panic, each cpu is sent an IPI and shuts itself down, recording
>  its state in its crash_notes. When all cpus are shutdown, the
>  crash kernel is launched with a pointer to the elfcorehdr.
> 
> - The crash kernel via linux/fs/proc/vmcore.c does not examine or
>  use the contents of the PT_NOTEs, it exposes them via /proc/vmcore.
> 
> - The makedumpfile utility uses /proc/vmcore and reads the cpu
>  PT_NOTEs to craft a nr_cpus variable, which is reported in a
>  header but otherwise generally unused. Makedumpfile creates the
>  vmcore.
> 
> - The 'crash' dump analyzer does not appear to reference the cpu
>  PT_NOTEs. Instead it looks up the cpu_[possible|present|online]_mask
>  symbols and directly examines those structure contents from vmcore
>  memory. From that information it is able to determine which cpus
>  are present and online, and locate the corresponding crash_notes.
>  Said differently, it appears to me that 'crash' analyzer does not
>  rely on the ELF PT_NOTEs for cpus; rather it obtains the information
>  directly via kernel symbols and the memory within the vmcore.
> 
> (There may be other vmcore-generating and analysis tools that do use
> these PT_NOTEs, but 'makedumpfile' and 'crash' seem to me to be the
> most common solution.)
> 
> This change results in the benefit of having all cpus described in
> the elfcorehdr, and therefore reducing the need to re-generate the
> elfcorehdr on cpu changes, at the small expense of an additional
> 56 bytes per PT_NOTE for not-present-but-possible cpus.
> 
> On systems where kexec_file_load() syscall is utilized, all the above
> is valid. On systems where kexec_load() syscall is utilized, there
> may be the need for the elfcorehdr to be regenerated once. The reason
> being that some archs only populate the 'present' cpus in the
> /sys/devices/system/cpu entries, which the userspace 'kexec' utility
> uses to generate the userspace-supplied elfcorehdr. In this situation,
> one memory or cpu change will rewrite the elfcorehdr via the
> crash_prepare_elf64_headers() function and now all possible cpus will
> be described, just as with kexec_file_load() syscall.

So, with for_each_possible_cpu(), we don't need to respond to cpu
hotplug events, right? If so, it does bring a benefit, though kexec_load
won't benefit from that. So far, it looks not bad.

  

Patch

diff --git a/kernel/crash_core.c b/kernel/crash_core.c
index 5545de4597d0..d985d334fae4 100644
--- a/kernel/crash_core.c
+++ b/kernel/crash_core.c
@@ -366,6 +366,14 @@  int crash_prepare_elf64_headers(struct kimage *image, struct crash_mem *mem,
 
 	/* Prepare one phdr of type PT_NOTE for each present CPU */
 	for_each_present_cpu(cpu) {
+#ifdef CONFIG_CRASH_HOTPLUG
+		if (IS_ENABLED(CONFIG_HOTPLUG_CPU)) {
+			/* Skip the soon-to-be offlined cpu */
+			if ((image->hp_action == KEXEC_CRASH_HP_REMOVE_CPU) &&
+				(cpu == image->offlinecpu))
+				continue;
+		}
+#endif
 		phdr->p_type = PT_NOTE;
 		notes_addr = per_cpu_ptr_to_phys(per_cpu_ptr(crash_notes, cpu));
 		phdr->p_offset = phdr->p_paddr = notes_addr;
@@ -769,6 +777,14 @@  static void handle_hotplug_event(unsigned int hp_action, unsigned int cpu)
 			/* Differentiate between normal load and hotplug update */
 			image->hp_action = hp_action;
 
+			/*
+			 * Record which CPU is being unplugged/offlined, so that it
+			 * is explicitly excluded in crash_prepare_elf64_headers().
+			 */
+			image->offlinecpu =
+				(hp_action == KEXEC_CRASH_HP_REMOVE_CPU) ?
+					cpu : KEXEC_CRASH_HP_INVALID_CPU;
+
 			/* Now invoke arch-specific update handler */
 			arch_crash_handle_hotplug_event(image);