Message ID | 20230117034921.185150-2-bhe@redhat.com |
---|---|
State | New |
Headers |
From: Baoquan He <bhe@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: kexec@lists.infradead.org, linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com, will@kernel.org, thunder.leizhen@huawei.com, John.p.donnelly@oracle.com, wangkefeng.wang@huawei.com, Baoquan He <bhe@redhat.com>
Subject: [PATCH 1/2] arm64: kdump: simplify the reservation behaviour of crashkernel=,high
Date: Tue, 17 Jan 2023 11:49:20 +0800
Message-Id: <20230117034921.185150-2-bhe@redhat.com>
In-Reply-To: <20230117034921.185150-1-bhe@redhat.com>
References: <20230117034921.185150-1-bhe@redhat.com>
Series |
arm64: kdump: simplify the reservation behaviour of crashkernel=,high
|
|
Commit Message
Baoquan He
Jan. 17, 2023, 3:49 a.m. UTC
On arm64, reservation for 'crashkernel=xM,high' is done by searching for
a suitable memory region top down. If the 'xM' of crashkernel high memory
is reserved from high memory successfully, it will then try to reserve
crashkernel low memory accordingly. Otherwise, it will search the low
memory area for a suitable 'xM' region.
However, we observed an unexpected case where a reserved region crosses
the high and low memory boundary. E.g. on a system with 4G as the low
memory end, with the kernel parameter 'crashkernel=512M,high' added, the
running kernel could end up with the regions [4G-126M, 4G+386M] and
[1G, 1G+128M]. This looks very strange because we then have two low
memory regions, [4G-126M, 4G] and [1G, 1G+128M], and much explanation
needs to be given to tell why that happened.
Here, for crashkernel=xM,high, search high memory for a suitable region
above the high and low memory boundary. If that fails, try reserving a
suitable region below the boundary. This way, the crashkernel high
region will only ever exist in high memory, and the crashkernel low
region only in low memory, making the reservation behaviour for
crashkernel=,high clearer and simpler.
Signed-off-by: Baoquan He <bhe@redhat.com>
---
arch/arm64/mm/init.c | 30 +++++++++++++++++++++++-------
1 file changed, 23 insertions(+), 7 deletions(-)
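The old vs. new search order described in the commit message can be made concrete with a small Python sketch. This is not kernel code: the single free span [1G, 4G+386M] and the 512M request are assumed values chosen to reproduce the example above, and alloc_top_down() only mimics memblock's top-down placement.

```python
MB = 1 << 20
GB = 1 << 30
LOW_MAX = 4 * GB  # stand-in for CRASH_ADDR_LOW_MAX

def alloc_top_down(spans, size, lo, hi):
    """Mimic memblock_phys_alloc_range(): place the allocation at the
    top of the highest free span that fits inside [lo, hi), else 0."""
    for start, end in sorted(spans, reverse=True):
        s, e = max(start, lo), min(end, hi)
        if e - s >= size:
            return e - size
    return 0

# Assumed map: one free span [1G, 4G+386M], no hole at the 4G mark.
spans = [(1 * GB, 4 * GB + 386 * MB)]
size = 512 * MB

# Old behaviour: a single top-down search over [0, HIGH_MAX] can
# land a region that straddles the 4G boundary.
old = alloc_top_down(spans, size, 0, 2**64)

# New behaviour: search above the boundary first; if that fails,
# retry wholly below it.
new = alloc_top_down(spans, size, LOW_MAX, 2**64)
if not new:
    new = alloc_top_down(spans, size, 0, LOW_MAX)

print(hex(old), hex(old + size))  # old region straddles 4G
print(hex(new), hex(new + size))  # new region is entirely below 4G
```

With this assumed map, the old search returns [4G-126M, 4G+386M], crossing the boundary exactly as in the commit message, while the new two-step search fails the 386M-wide high range and falls back to a region entirely below 4G.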
Comments
On Tue, Jan 17, 2023 at 11:49:20AM +0800, Baoquan He wrote: > On arm64, reservation for 'crashkernel=xM,high' is taken by searching for > suitable memory region up down. If the 'xM' of crashkernel high memory > is reserved from high memory successfully, it will try to reserve > crashkernel low memory later accoringly. Otherwise, it will try to search > low memory area for the 'xM' suitable region. > > While we observed an unexpected case where a reserved region crosses the > high and low meomry boundary. E.g on a system with 4G as low memory end, > user added the kernel parameters like: 'crashkernel=512M,high', it could > finally have [4G-126M, 4G+386M], [1G, 1G+128M] regions in running kernel. > This looks very strange because we have two low memory regions > [4G-126M, 4G] and [1G, 1G+128M]. Much explanation need be given to tell > why that happened. > > Here, for crashkernel=xM,high, search the high memory for the suitable > region above the high and low memory boundary. If failed, try reserving > the suitable region below the boundary. Like this, the crashkernel high > region will only exist in high memory, and crashkernel low region only > exists in low memory. The reservation behaviour for crashkernel=,high is > clearer and simpler. 
> > Signed-off-by: Baoquan He <bhe@redhat.com> > --- > arch/arm64/mm/init.c | 30 +++++++++++++++++++++++------- > 1 file changed, 23 insertions(+), 7 deletions(-) > > diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c > index 58a0bb2c17f1..26a05af2bfa8 100644 > --- a/arch/arm64/mm/init.c > +++ b/arch/arm64/mm/init.c > @@ -127,12 +127,13 @@ static int __init reserve_crashkernel_low(unsigned long long low_size) > */ > static void __init reserve_crashkernel(void) > { > - unsigned long long crash_base, crash_size; > - unsigned long long crash_low_size = 0; > + unsigned long long crash_base, crash_size, search_base; > unsigned long long crash_max = CRASH_ADDR_LOW_MAX; > + unsigned long long crash_low_size = 0; > char *cmdline = boot_command_line; > - int ret; > bool fixed_base = false; > + bool high = false; > + int ret; > > if (!IS_ENABLED(CONFIG_KEXEC_CORE)) > return; > @@ -155,7 +156,9 @@ static void __init reserve_crashkernel(void) > else if (ret) > return; > > + search_base = CRASH_ADDR_LOW_MAX; > crash_max = CRASH_ADDR_HIGH_MAX; > + high = true; > } else if (ret || !crash_size) { > /* The specified value is invalid */ > return; > @@ -166,31 +169,44 @@ static void __init reserve_crashkernel(void) > /* User specifies base address explicitly. */ > if (crash_base) { > fixed_base = true; > + search_base = crash_base; > crash_max = crash_base + crash_size; > } > > retry: > crash_base = memblock_phys_alloc_range(crash_size, CRASH_ALIGN, > - crash_base, crash_max); > + search_base, crash_max); > if (!crash_base) { > + if (fixed_base) { > + pr_warn("cannot reserve crashkernel region [0x%llx-0x%llx]\n", > + search_base, crash_max); > + return; > + } > + > /* > * If the first attempt was for low memory, fall back to > * high memory, the minimum required low memory will be > * reserved later. 
> */ > - if (!fixed_base && (crash_max == CRASH_ADDR_LOW_MAX)) { > + if (!high && crash_max == CRASH_ADDR_LOW_MAX) { > crash_max = CRASH_ADDR_HIGH_MAX; > + search_base = CRASH_ADDR_LOW_MAX; > crash_low_size = DEFAULT_CRASH_KERNEL_LOW_SIZE; > goto retry; > } > > + if (high && (crash_max == CRASH_ADDR_HIGH_MAX)) { nit: unnecessary (and inconsistent with code just above) parentheses. > + crash_max = CRASH_ADDR_LOW_MAX; > + search_base = 0; > + goto retry; > + } > pr_warn("cannot allocate crashkernel (size:0x%llx)\n", > crash_size); > return; > } > > - if ((crash_base > CRASH_ADDR_LOW_MAX - crash_low_size) && > - crash_low_size && reserve_crashkernel_low(crash_low_size)) { > + if ((crash_base >= CRASH_ADDR_LOW_MAX) && crash_low_size && > + reserve_crashkernel_low(crash_low_size)) { > memblock_phys_free(crash_base, crash_size); > return; > } > -- > 2.34.1 > > > _______________________________________________ > kexec mailing list > kexec@lists.infradead.org > http://lists.infradead.org/mailman/listinfo/kexec >
On 01/20/23 at 10:04am, Simon Horman wrote: > On Tue, Jan 17, 2023 at 11:49:20AM +0800, Baoquan He wrote: > > On arm64, reservation for 'crashkernel=xM,high' is taken by searching for > > suitable memory region up down. If the 'xM' of crashkernel high memory > > is reserved from high memory successfully, it will try to reserve > > crashkernel low memory later accoringly. Otherwise, it will try to search > > low memory area for the 'xM' suitable region. > > > > While we observed an unexpected case where a reserved region crosses the > > high and low meomry boundary. E.g on a system with 4G as low memory end, > > user added the kernel parameters like: 'crashkernel=512M,high', it could > > finally have [4G-126M, 4G+386M], [1G, 1G+128M] regions in running kernel. > > This looks very strange because we have two low memory regions > > [4G-126M, 4G] and [1G, 1G+128M]. Much explanation need be given to tell > > why that happened. > > > > Here, for crashkernel=xM,high, search the high memory for the suitable > > region above the high and low memory boundary. If failed, try reserving > > the suitable region below the boundary. Like this, the crashkernel high > > region will only exist in high memory, and crashkernel low region only > > exists in low memory. The reservation behaviour for crashkernel=,high is > > clearer and simpler. 
> > > > Signed-off-by: Baoquan He <bhe@redhat.com> > > --- > > arch/arm64/mm/init.c | 30 +++++++++++++++++++++++------- > > 1 file changed, 23 insertions(+), 7 deletions(-) > > > > diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c > > index 58a0bb2c17f1..26a05af2bfa8 100644 > > --- a/arch/arm64/mm/init.c > > +++ b/arch/arm64/mm/init.c > > @@ -127,12 +127,13 @@ static int __init reserve_crashkernel_low(unsigned long long low_size) > > */ > > static void __init reserve_crashkernel(void) > > { > > - unsigned long long crash_base, crash_size; > > - unsigned long long crash_low_size = 0; > > + unsigned long long crash_base, crash_size, search_base; > > unsigned long long crash_max = CRASH_ADDR_LOW_MAX; > > + unsigned long long crash_low_size = 0; > > char *cmdline = boot_command_line; > > - int ret; > > bool fixed_base = false; > > + bool high = false; > > + int ret; > > > > if (!IS_ENABLED(CONFIG_KEXEC_CORE)) > > return; > > @@ -155,7 +156,9 @@ static void __init reserve_crashkernel(void) > > else if (ret) > > return; > > > > + search_base = CRASH_ADDR_LOW_MAX; > > crash_max = CRASH_ADDR_HIGH_MAX; > > + high = true; > > } else if (ret || !crash_size) { > > /* The specified value is invalid */ > > return; > > @@ -166,31 +169,44 @@ static void __init reserve_crashkernel(void) > > /* User specifies base address explicitly. */ > > if (crash_base) { > > fixed_base = true; > > + search_base = crash_base; > > crash_max = crash_base + crash_size; > > } > > > > retry: > > crash_base = memblock_phys_alloc_range(crash_size, CRASH_ALIGN, > > - crash_base, crash_max); > > + search_base, crash_max); > > if (!crash_base) { > > + if (fixed_base) { > > + pr_warn("cannot reserve crashkernel region [0x%llx-0x%llx]\n", > > + search_base, crash_max); > > + return; > > + } > > + > > /* > > * If the first attempt was for low memory, fall back to > > * high memory, the minimum required low memory will be > > * reserved later. 
> > */ > > - if (!fixed_base && (crash_max == CRASH_ADDR_LOW_MAX)) { > > + if (!high && crash_max == CRASH_ADDR_LOW_MAX) { > > crash_max = CRASH_ADDR_HIGH_MAX; > > + search_base = CRASH_ADDR_LOW_MAX; > > crash_low_size = DEFAULT_CRASH_KERNEL_LOW_SIZE; > > goto retry; > > } > > > > + if (high && (crash_max == CRASH_ADDR_HIGH_MAX)) { > > nit: unnecessary (and inconsistent with code just above) parentheses. Indeed, will remove it. Thanks for reviewing. > > > + crash_max = CRASH_ADDR_LOW_MAX; > > + search_base = 0; > > + goto retry; > > + } > > pr_warn("cannot allocate crashkernel (size:0x%llx)\n", > > crash_size); > > return; > > } > > > > - if ((crash_base > CRASH_ADDR_LOW_MAX - crash_low_size) && > > - crash_low_size && reserve_crashkernel_low(crash_low_size)) { > > + if ((crash_base >= CRASH_ADDR_LOW_MAX) && crash_low_size && > > + reserve_crashkernel_low(crash_low_size)) { > > memblock_phys_free(crash_base, crash_size); > > return; > > } > > -- > > 2.34.1 > > > > > > _______________________________________________ > > kexec mailing list > > kexec@lists.infradead.org > > http://lists.infradead.org/mailman/listinfo/kexec > > >
On Tue, Jan 17, 2023 at 11:49:20AM +0800, Baoquan He wrote: > On arm64, reservation for 'crashkernel=xM,high' is taken by searching for > suitable memory region up down. If the 'xM' of crashkernel high memory > is reserved from high memory successfully, it will try to reserve > crashkernel low memory later accoringly. Otherwise, it will try to search > low memory area for the 'xM' suitable region. > > While we observed an unexpected case where a reserved region crosses the > high and low meomry boundary. E.g on a system with 4G as low memory end, > user added the kernel parameters like: 'crashkernel=512M,high', it could > finally have [4G-126M, 4G+386M], [1G, 1G+128M] regions in running kernel. > This looks very strange because we have two low memory regions > [4G-126M, 4G] and [1G, 1G+128M]. Much explanation need be given to tell > why that happened. > > Here, for crashkernel=xM,high, search the high memory for the suitable > region above the high and low memory boundary. If failed, try reserving > the suitable region below the boundary. Like this, the crashkernel high > region will only exist in high memory, and crashkernel low region only > exists in low memory. The reservation behaviour for crashkernel=,high is > clearer and simpler. Well, I guess it depends on how you look at the 'high' option: is it permitting to go into high addresses or forcing high addresses only? IIUC the x86 implementation has a similar behaviour to the arm64 one, it allows allocation across boundary. What x86 seems to do though is that if crash_base of the high allocation is below 4G, it gives up on further low allocation. On arm64 we had this initially but improved it slightly to check whether the low allocation is of sufficient size. In your example above, it is 126MB instead of 128MB, hence an explicit low allocation. Is the only problem that some users get confused? I don't see this as a significant issue. 
However, with your patch, there is a potential failure if there isn't sufficient memory to accommodate the request in either high or low ranges.
On 01/24/23 at 05:36pm, Catalin Marinas wrote: > On Tue, Jan 17, 2023 at 11:49:20AM +0800, Baoquan He wrote: > > On arm64, reservation for 'crashkernel=xM,high' is taken by searching for > > suitable memory region up down. If the 'xM' of crashkernel high memory > > is reserved from high memory successfully, it will try to reserve > > crashkernel low memory later accoringly. Otherwise, it will try to search > > low memory area for the 'xM' suitable region. > > > > While we observed an unexpected case where a reserved region crosses the > > high and low meomry boundary. E.g on a system with 4G as low memory end, > > user added the kernel parameters like: 'crashkernel=512M,high', it could > > finally have [4G-126M, 4G+386M], [1G, 1G+128M] regions in running kernel. > > This looks very strange because we have two low memory regions > > [4G-126M, 4G] and [1G, 1G+128M]. Much explanation need be given to tell > > why that happened. > > > > Here, for crashkernel=xM,high, search the high memory for the suitable > > region above the high and low memory boundary. If failed, try reserving > > the suitable region below the boundary. Like this, the crashkernel high > > region will only exist in high memory, and crashkernel low region only > > exists in low memory. The reservation behaviour for crashkernel=,high is > > clearer and simpler. > Thanks for looking into this. Please see inline comments. > Well, I guess it depends on how you look at the 'high' option: is it > permitting to go into high addresses or forcing high addresses only? > IIUC the x86 implementation has a similar behaviour to the arm64 one, it > allows allocation across boundary. Hmm, x86 has no chance to allocate a memory region across 4G boundary because it reserves many small regions to map firmware, pci bus, etc near 4G. E.g one x86 system has /proc/iomem as below. I haven't seen a x86 system which doesn't look like this. 
[root@ ~]# cat /proc/iomem
00000000-00000fff : Reserved
00001000-0009fbff : System RAM
0009fc00-0009ffff : Reserved
000a0000-000bffff : PCI Bus 0000:00
000c0000-000c93ff : Video ROM
000c9800-000ca5ff : Adapter ROM
000ca800-000ccbff : Adapter ROM
000f0000-000fffff : Reserved
000f0000-000fffff : System ROM
00100000-bffeffff : System RAM
73200000-74001b07 : Kernel code
74200000-74bebfff : Kernel rodata
74c00000-75167cbf : Kernel data
758a4000-75ffffff : Kernel bss
af000000-beffffff : Crash kernel
bfff0000-bfffffff : Reserved
c0000000-febfffff : PCI Bus 0000:00
fc000000-fdffffff : 0000:00:02.0
fc000000-fdffffff : cirrus
feb80000-febbffff : 0000:00:03.0
febd0000-febd0fff : 0000:00:02.0
febd0000-febd0fff : cirrus
febd1000-febd1fff : 0000:00:03.0
febd2000-febd2fff : 0000:00:04.0
febd3000-febd3fff : 0000:00:06.0
febd4000-febd4fff : 0000:00:07.0
febd5000-febd5fff : 0000:00:08.0
febd6000-febd6fff : 0000:00:09.0
febd7000-febd7fff : 0000:00:0a.0
fec00000-fec003ff : IOAPIC 0
fee00000-fee00fff : Local APIC
feffc000-feffffff : Reserved
fffc0000-ffffffff : Reserved
100000000-13fffffff : System RAM

> What x86 seems to do though is that if crash_base of the high allocation
> is below 4G, it gives up on further low allocation. On arm64 we had this
> initially but improved it slightly to check whether the low allocation
> is of sufficient size. In your example above, it is 126MB instead of
> 128MB, hence an explicit low allocation.

Right. From the code, x86 tries to allocate the crashkernel high region top down. If the crashkernel high region is above 4G, it reserves 128M for crashkernel low; if it only allocates a region under 4G, it takes no further action. But arm64 allocates crashkernel high memory top down and can cross the 4G boundary. This brings 3 issues:

1) For crashkernel=x,high, it could get a crashkernel high region crossing the 4G boundary. The user will then see two memory regions under 4G and one memory region above 4G. The two low memory regions are confusing.
2) If people explicitly specify "crashkernel=x,high crashkernel=y,low" with y <= 128M, e.g. "crashkernel=256M,high crashkernel=64M,low", then when the crashkernel high region crosses the 4G boundary and the part of it below 4G is bigger than y, the expected crashkernel low reservation will be skipped, while the expected crashkernel high reservation is shrunk and may not satisfy the user space requirement.

3) The crossing-boundary behaviour of the crashkernel high reservation differs from the x86 arch. From a distro's point of view, this brings inconsistency and confusion, and users need to dig into x86 and arm64 details to find out why.

For upstream kernel developers and maintainers, issue 3) is only a slight impact, while issues 1) and 2) have actual effects. With a small code change to fix this, we get simpler, more understandable crashkernel=,high reservation behaviour.

>
> Is the only problem that some users get confused? I don't see this as a
> significant issue. However, with your patch, there is a potential
> failure if there isn't sufficient memory to accommodate the request in
> either high or low ranges.

I think we don't need to worry about the potential failure. Before, without crashkernel=,high support, no matter how large the system memory was, crashkernel memory could only be reserved under 4G. With crashkernel=,high support, we don't have that limitation. If a system can only satisfy a crashkernel reservation across the 4G boundary, the user needs to consider decreasing the crashkernel=,high value and trying again. However, the crossing-boundary reservation of the crashkernel high region brings obscure semantics and behaviour, and that is a problem we should fix.

Thanks
Baoquan
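Issue 2) above can also be illustrated with a small Python sketch. This is not kernel code: the RAM top of 4G+130M is an assumed value chosen so that the 256M high region straddles the boundary, and the final check mirrors the old arm64 condition `crash_base > CRASH_ADDR_LOW_MAX - crash_low_size`.

```python
MB = 1 << 20
GB = 1 << 30
LOW_MAX = 4 * GB  # stand-in for CRASH_ADDR_LOW_MAX

def alloc_top_down(start, end, size, lo, hi):
    """Top-down placement in the overlap of [start, end) and [lo, hi)."""
    s, e = max(start, lo), min(end, hi)
    return e - size if e - s >= size else 0

# "crashkernel=256M,high crashkernel=64M,low" on assumed RAM [1G, 4G+130M].
high_size, low_size = 256 * MB, 64 * MB
base = alloc_top_down(1 * GB, 4 * GB + 130 * MB, high_size, 0, 2**64)

below_4g = LOW_MAX - base        # tail of the "high" region below 4G
above_4g = high_size - below_4g  # what actually sits in high memory

# Old arm64 check: the explicit low reservation is only performed when
# the high allocation left less than low_size below LOW_MAX.
low_skipped = not (base > LOW_MAX - low_size)

print(f"high [{hex(base)}, {hex(base + high_size)}]: "
      f"{below_4g // MB}M below 4G, {above_4g // MB}M above, "
      f"64M low reservation skipped: {low_skipped}")
```

With these assumed numbers, the high region lands at [4G-126M, 4G+130M]: the user's 64M low request is silently skipped (126M happens to sit below 4G), and only 130M of the requested 256M is actually in high memory.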
On Wed, Feb 01, 2023 at 01:57:17PM +0800, Baoquan He wrote: > On 01/24/23 at 05:36pm, Catalin Marinas wrote: > > On Tue, Jan 17, 2023 at 11:49:20AM +0800, Baoquan He wrote: > > > On arm64, reservation for 'crashkernel=xM,high' is taken by searching for > > > suitable memory region up down. If the 'xM' of crashkernel high memory > > > is reserved from high memory successfully, it will try to reserve > > > crashkernel low memory later accoringly. Otherwise, it will try to search > > > low memory area for the 'xM' suitable region. > > > > > > While we observed an unexpected case where a reserved region crosses the > > > high and low meomry boundary. E.g on a system with 4G as low memory end, > > > user added the kernel parameters like: 'crashkernel=512M,high', it could > > > finally have [4G-126M, 4G+386M], [1G, 1G+128M] regions in running kernel. > > > This looks very strange because we have two low memory regions > > > [4G-126M, 4G] and [1G, 1G+128M]. Much explanation need be given to tell > > > why that happened. > > > > > > Here, for crashkernel=xM,high, search the high memory for the suitable > > > region above the high and low memory boundary. If failed, try reserving > > > the suitable region below the boundary. Like this, the crashkernel high > > > region will only exist in high memory, and crashkernel low region only > > > exists in low memory. The reservation behaviour for crashkernel=,high is > > > clearer and simpler. > > > > Well, I guess it depends on how you look at the 'high' option: is it > > permitting to go into high addresses or forcing high addresses only? > > IIUC the x86 implementation has a similar behaviour to the arm64 one, it > > allows allocation across boundary. > > Hmm, x86 has no chance to allocate a memory region across 4G boundary > because it reserves many small regions to map firmware, pci bus, etc > near 4G. E.g one x86 system has /proc/iomem as below. I haven't seen a > x86 system which doesn't look like this. 
> > [root@ ~]# cat /proc/iomem [...] > fffc0000-ffffffff : Reserved > 100000000-13fffffff : System RAM Ah, that's why we don't see this problem on x86. Alright, for consistency I'm fine with having the same logic on arm64. I guess we don't need the additional check on whether the 'high' allocation reserved at least 128MB in the 'low' range. If it succeeded and the start is below 4GB, it's guaranteed that it got the full allocation in the 'low' range. I haven't checked whether your patch cleaned this up already, if not please do in the next version. And as already asked, please fold the comments with the same patch, it's easier to read.
On 02/01/23 at 05:07pm, Catalin Marinas wrote: > On Wed, Feb 01, 2023 at 01:57:17PM +0800, Baoquan He wrote: > > On 01/24/23 at 05:36pm, Catalin Marinas wrote: > > > On Tue, Jan 17, 2023 at 11:49:20AM +0800, Baoquan He wrote: > > > > On arm64, reservation for 'crashkernel=xM,high' is taken by searching for > > > > suitable memory region up down. If the 'xM' of crashkernel high memory > > > > is reserved from high memory successfully, it will try to reserve > > > > crashkernel low memory later accoringly. Otherwise, it will try to search > > > > low memory area for the 'xM' suitable region. > > > > > > > > While we observed an unexpected case where a reserved region crosses the > > > > high and low meomry boundary. E.g on a system with 4G as low memory end, > > > > user added the kernel parameters like: 'crashkernel=512M,high', it could > > > > finally have [4G-126M, 4G+386M], [1G, 1G+128M] regions in running kernel. > > > > This looks very strange because we have two low memory regions > > > > [4G-126M, 4G] and [1G, 1G+128M]. Much explanation need be given to tell > > > > why that happened. > > > > > > > > Here, for crashkernel=xM,high, search the high memory for the suitable > > > > region above the high and low memory boundary. If failed, try reserving > > > > the suitable region below the boundary. Like this, the crashkernel high > > > > region will only exist in high memory, and crashkernel low region only > > > > exists in low memory. The reservation behaviour for crashkernel=,high is > > > > clearer and simpler. > > > > > > Well, I guess it depends on how you look at the 'high' option: is it > > > permitting to go into high addresses or forcing high addresses only? > > > IIUC the x86 implementation has a similar behaviour to the arm64 one, it > > > allows allocation across boundary. > > > > Hmm, x86 has no chance to allocate a memory region across 4G boundary > > because it reserves many small regions to map firmware, pci bus, etc > > near 4G. 
E.g one x86 system has /proc/iomem as below. I haven't seen a > > x86 system which doesn't look like this. > > > > [root@ ~]# cat /proc/iomem > [...] > > fffc0000-ffffffff : Reserved > > 100000000-13fffffff : System RAM > > Ah, that's why we don't see this problem on x86. > > Alright, for consistency I'm fine with having the same logic on arm64. I > guess we don't need the additional check on whether the 'high' > allocation reserved at least 128MB in the 'low' range. If it succeeded > and the start is below 4GB, it's guaranteed that it got the full > allocation in the 'low' range. I haven't checked whether your patch > cleaned this up already, if not please do in the next version. Yes, that checking has been cleaned away in this patch. > > And as already asked, please fold the comments with the same patch, it's > easier to read. Sure, will do. Thanks a lot for reviewing.
Hi Catalin, On 02/01/23 at 05:07pm, Catalin Marinas wrote: > On Wed, Feb 01, 2023 at 01:57:17PM +0800, Baoquan He wrote: > > On 01/24/23 at 05:36pm, Catalin Marinas wrote: > > > On Tue, Jan 17, 2023 at 11:49:20AM +0800, Baoquan He wrote: > > > > On arm64, reservation for 'crashkernel=xM,high' is taken by searching for > > > > suitable memory region up down. If the 'xM' of crashkernel high memory > > > > is reserved from high memory successfully, it will try to reserve > > > > crashkernel low memory later accoringly. Otherwise, it will try to search > > > > low memory area for the 'xM' suitable region. > > > > > > > > While we observed an unexpected case where a reserved region crosses the > > > > high and low meomry boundary. E.g on a system with 4G as low memory end, > > > > user added the kernel parameters like: 'crashkernel=512M,high', it could > > > > finally have [4G-126M, 4G+386M], [1G, 1G+128M] regions in running kernel. > > > > This looks very strange because we have two low memory regions > > > > [4G-126M, 4G] and [1G, 1G+128M]. Much explanation need be given to tell > > > > why that happened. > > > > > > > > Here, for crashkernel=xM,high, search the high memory for the suitable > > > > region above the high and low memory boundary. If failed, try reserving > > > > the suitable region below the boundary. Like this, the crashkernel high > > > > region will only exist in high memory, and crashkernel low region only > > > > exists in low memory. The reservation behaviour for crashkernel=,high is > > > > clearer and simpler. > > > > > > Well, I guess it depends on how you look at the 'high' option: is it > > > permitting to go into high addresses or forcing high addresses only? > > > IIUC the x86 implementation has a similar behaviour to the arm64 one, it > > > allows allocation across boundary. > > > > Hmm, x86 has no chance to allocate a memory region across 4G boundary > > because it reserves many small regions to map firmware, pci bus, etc > > near 4G. 
> > E.g one x86 system has /proc/iomem as below. I haven't seen a
> > x86 system which doesn't look like this.
> >
> > [root@ ~]# cat /proc/iomem
> [...]
> > fffc0000-ffffffff : Reserved
> > 100000000-13fffffff : System RAM
>
> Ah, that's why we don't see this problem on x86.
>
> Alright, for consistency I'm fine with having the same logic on arm64. I
> guess we don't need the additional check on whether the 'high'
> allocation reserved at least 128MB in the 'low' range. If it succeeded
> and the start is below 4GB, it's guaranteed that it got the full
> allocation in the 'low' range. I haven't checked whether your patch
> cleaned this up already, if not please do in the next version.
>
> And as already asked, please fold the comments with the same patch, it's
> easier to read.

I have updated the patch according to your and Simon's suggestions, and resent v2. By the way, could you please have a look at the patchset below, to see which solution we should take to solve the spotted problem on arm64?

===
arm64, kdump: enforce to take 4G as the crashkernel low memory end
https://lore.kernel.org/all/20220828005545.94389-1-bhe@redhat.com/T/#u

After thorough discussion, I think the problem and root cause are very clear to us; however, which way to solve it hasn't been decided. In our distros, RHEL and Fedora, we enable both CONFIG_ZONE_DMA and CONFIG_ZONE_DMA32 by default and need to set crashkernel= in cmdline, and we don't set the 'rodata=' kernel parameter unless we have to. I am fine with either taking off the protection on the crashkernel region or taking the approach done in my patchset.

Thanks
Baoquan
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 58a0bb2c17f1..26a05af2bfa8 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -127,12 +127,13 @@ static int __init reserve_crashkernel_low(unsigned long long low_size)
  */
 static void __init reserve_crashkernel(void)
 {
-	unsigned long long crash_base, crash_size;
-	unsigned long long crash_low_size = 0;
+	unsigned long long crash_base, crash_size, search_base;
 	unsigned long long crash_max = CRASH_ADDR_LOW_MAX;
+	unsigned long long crash_low_size = 0;
 	char *cmdline = boot_command_line;
-	int ret;
 	bool fixed_base = false;
+	bool high = false;
+	int ret;
 
 	if (!IS_ENABLED(CONFIG_KEXEC_CORE))
 		return;
@@ -155,7 +156,9 @@ static void __init reserve_crashkernel(void)
 		else if (ret)
 			return;
 
+		search_base = CRASH_ADDR_LOW_MAX;
 		crash_max = CRASH_ADDR_HIGH_MAX;
+		high = true;
 	} else if (ret || !crash_size) {
 		/* The specified value is invalid */
 		return;
@@ -166,31 +169,44 @@ static void __init reserve_crashkernel(void)
 	/* User specifies base address explicitly. */
 	if (crash_base) {
 		fixed_base = true;
+		search_base = crash_base;
 		crash_max = crash_base + crash_size;
 	}
 
 retry:
 	crash_base = memblock_phys_alloc_range(crash_size, CRASH_ALIGN,
-					       crash_base, crash_max);
+					       search_base, crash_max);
 	if (!crash_base) {
+		if (fixed_base) {
+			pr_warn("cannot reserve crashkernel region [0x%llx-0x%llx]\n",
+				search_base, crash_max);
+			return;
+		}
+
 		/*
 		 * If the first attempt was for low memory, fall back to
 		 * high memory, the minimum required low memory will be
 		 * reserved later.
 		 */
-		if (!fixed_base && (crash_max == CRASH_ADDR_LOW_MAX)) {
+		if (!high && crash_max == CRASH_ADDR_LOW_MAX) {
 			crash_max = CRASH_ADDR_HIGH_MAX;
+			search_base = CRASH_ADDR_LOW_MAX;
 			crash_low_size = DEFAULT_CRASH_KERNEL_LOW_SIZE;
 			goto retry;
 		}
 
+		if (high && (crash_max == CRASH_ADDR_HIGH_MAX)) {
+			crash_max = CRASH_ADDR_LOW_MAX;
+			search_base = 0;
+			goto retry;
+		}
 		pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
 			crash_size);
 		return;
 	}
 
-	if ((crash_base > CRASH_ADDR_LOW_MAX - crash_low_size) &&
-	    crash_low_size && reserve_crashkernel_low(crash_low_size)) {
+	if ((crash_base >= CRASH_ADDR_LOW_MAX) && crash_low_size &&
+	    reserve_crashkernel_low(crash_low_size)) {
 		memblock_phys_free(crash_base, crash_size);
 		return;
 	}