Message ID | 20230510-dt-resv-bottom-up-v1-1-3bf68873dbed@gerhold.net |
---|---|
State | New |
Headers |
Return-Path: <linux-kernel-owner@vger.kernel.org> Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp6815562vqo; Mon, 15 May 2023 03:34:44 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ5TumL4yeHheCksWqV2vf372JE1TObu4P8SXT6CgSLXMoEat7vhp2+cqqkqH3TBuI3EBXef X-Received: by 2002:a17:902:d48d:b0:1ac:8835:b89b with SMTP id c13-20020a170902d48d00b001ac8835b89bmr30890441plg.5.1684146883418; Mon, 15 May 2023 03:34:43 -0700 (PDT) ARC-Seal: i=2; a=rsa-sha256; t=1684146883; cv=pass; d=google.com; s=arc-20160816; b=IpKjhACfdNSSxr21RW7XWHy+EvV7qgtHcChNFQ43G4r8jWLrB6kr8O1pn7Y6uzbt2q SK3WQVwcTm2VxRxCmNzz60FbsOqp+mxTEmqS5qrFiHdXcjcYDE3BHq6VukiPiF41r2wU HZAdD96HZsQV9YQ1XkK0y78aIONiaiL/s5AuFREB+BXUgny6pm3YAjFOD9QunTPBhPdc NbxEZiLU0sixtNkBCeCV2ilB0GecO7mot4hk3QjjzFHPDr/girHk6ihKJAVz656q3dvq hd3DTNMcf5W9w+MoAkHNf86ud68E/kxcuzsGwK5fXJ+KT1VLrpuJ2hTNjM3ZdKUytrje 0hRQ== ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:cc:to:in-reply-to:references:message-id :content-transfer-encoding:mime-version:subject:date:from :dkim-signature:dkim-signature; bh=olVuiJZGfA/LHV3YVLMLjpbt55lnzWTZbOuO9FJhuMI=; b=fokvkibepV2YNghinW4kUFxUwotzrgiknt27/mZwI7OoxnbdAngaEEpJVPxR7bTJEN VLgVmmoGBUrbrRRJPFm3ZpeR152JigaEvBWDQyCbEbvalEJl2PuijNXmASQ+hkjAoiuY bkqH/rutF/0Q3FM97uXA12d8a+GZo89PB7qRMZNqipVvvSKV8qBZxwL70jEIFJ1zAyba 612jW9Q9cYMCy9vmZ2CdS4aq9h8O8csW9TBl0RsgnO8WyRnjPTOsT1aCopXh+F+u2Chf WaF/pmL/32BbjeEyGsnuE4AxnAMjgPpSlJiw1blDqHR5XHXDh1JwDRh4SW2MtQ2+lvZS xqfw== ARC-Authentication-Results: i=2; mx.google.com; dkim=pass header.i=@gerhold.net header.s=strato-dkim-0002 header.b=CY+o6P9V; dkim=neutral (no key) header.i=@gerhold.net header.b=NpzOsiBC; arc=pass (i=1); spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id s28-20020a63af5c000000b00530b7eca08esi5237359pgo.340.2023.05.15.03.34.29; Mon, 15 May 2023 03:34:43 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@gerhold.net header.s=strato-dkim-0002 header.b=CY+o6P9V; dkim=neutral (no key) header.i=@gerhold.net header.b=NpzOsiBC; arc=pass (i=1); spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S241144AbjEOKQ1 (ORCPT <rfc822;peekingduck44@gmail.com> + 99 others); Mon, 15 May 2023 06:16:27 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50256 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240832AbjEOKQM (ORCPT <rfc822;linux-kernel@vger.kernel.org>); Mon, 15 May 2023 06:16:12 -0400 X-Greylist: delayed 179 seconds by postgrey-1.37 at lindbergh.monkeyblade.net; Mon, 15 May 2023 03:15:35 PDT Received: from mo4-p02-ob.smtp.rzone.de (mo4-p02-ob.smtp.rzone.de [85.215.255.80]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E02BC2106; Mon, 15 May 2023 03:15:34 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1684145544; cv=none; d=strato.com; s=strato-dkim-0002; b=qDTIIAzqDyNFmO0GjIMWdlTAsPfxoi9c54XrSPnPIBCD+/Ac0NqjPIo6RX6ZXYFA/v XfSRhLCFFqaZY+hM+ieOq8rjSYgktwQ/tfFSeaZAmb+l7MHOBSN4ag8OibJxs/nNzGuJ yqHI9U60zAdGQoDotldrbSoLHJ3sxR3Qfu9o4COGYCqhsQq6l6N9z7Kw7UQS2+hRBwIR ftSNtfx19eFQlt3El2cFRuG67B9/TXphBjO+Sjj2KW4jf6hH/KJ9cnxjcR4SGfqgcLPZ RArsEHlfPp6fstmF4Zj5Agy2rxlSMF7RV1jvuhngOzoXFykStQDrUs8J8MweXQjExpqD ybsw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1684145544; s=strato-dkim-0002; d=strato.com; h=Cc:To:In-Reply-To:References:Message-Id:Subject:Date:From:Cc:Date: From:Subject:Sender; bh=olVuiJZGfA/LHV3YVLMLjpbt55lnzWTZbOuO9FJhuMI=; b=V6mauyZYvxwo5w3sMUAKskAbdeyyXVMqzWDRWwTuyaCbU9/14gd3OZUCy4fSob6NBl +pP2fjAi05cVDIRS9WVjjmdDh4HOuGTqMstUiaOGL+XIs9QGySlmIgEiCO4M1q/1iGSu MWPKD5uWoPvJ8KS524PAe+iZXzdMYRXYiM20K4LDW40/PHJ2AMGETzRzF5TxMXQxXVGa u3uIVWYGvYN4z1wyYRYEN69RSAS6OQjxcWqlhYXj9so/o/hJH0q/I11WFB7yU5AJ5Ta0 q9xTCt4ceGjb82ozounPElMKglNLribywWcnBbr5wYmG7TgOygKjw0qkUhK+O4y4CM4Z xfyg== ARC-Authentication-Results: i=1; strato.com; arc=none; dkim=none X-RZG-CLASS-ID: mo02 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1684145544; s=strato-dkim-0002; d=gerhold.net; h=Cc:To:In-Reply-To:References:Message-Id:Subject:Date:From:Cc:Date: From:Subject:Sender; bh=olVuiJZGfA/LHV3YVLMLjpbt55lnzWTZbOuO9FJhuMI=; b=CY+o6P9VjlLTzgmoC0ukNBftxBPWTp+7p5Sf63HXc0kUdrbe51QDEhkdp9Wtei8gCx 36mkGqS1MxlBxUd2aUZm8VSzzNvMDQ+iEyrYqhhDomIXlgCGy0lYIQlTuJVh/7vTZXjH LZk02fO22jf3tEZDtUtM/hEzd/CoSxwcSfng63j1v3SMB17uUs5bzuub7PAMNgvhSk5h fVs9gKoT23x5tolvWfYrBeCcMwEdRGVXI+f17rwhgYErxVvZpiA+WYm8KF5vlZFlNfSK JKBYgV4xLhWiG84cXENjPysczqq8CwbFkGjrxDqLv8EBWXhxPuxZ28E+Od8VshjfWQ3m NQpA== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1684145544; s=strato-dkim-0003; d=gerhold.net; h=Cc:To:In-Reply-To:References:Message-Id:Subject:Date:From:Cc:Date: From:Subject:Sender; bh=olVuiJZGfA/LHV3YVLMLjpbt55lnzWTZbOuO9FJhuMI=; b=NpzOsiBC7Df5KrzmYokWRVug7C5LvTnshPFSpfjZg+SXou+lgjK2J8KTa7xrsVuGtr lptCIGHXxBH9BsowsSBg== X-RZG-AUTH: ":P3gBZUipdd93FF5ZZvYFPugejmSTVR2nRPhVOQjVd4CteZ/7jYgS+mLFY+H0JAn8u4ly9TY=" 
Received: from [192.168.244.3] by smtp.strato.de (RZmta 49.4.0 DYNA|AUTH) with ESMTPSA id j6420az4FACO1JG (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits)) (Client did not present a certificate); Mon, 15 May 2023 12:12:24 +0200 (CEST) From: Stephan Gerhold <stephan@gerhold.net> Date: Mon, 15 May 2023 12:12:16 +0200 Subject: [PATCH 1/5] dt-bindings: reserved-memory: Add alloc-{bottom-up,top-down} MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit Message-Id: <20230510-dt-resv-bottom-up-v1-1-3bf68873dbed@gerhold.net> References: <20230510-dt-resv-bottom-up-v1-0-3bf68873dbed@gerhold.net> In-Reply-To: <20230510-dt-resv-bottom-up-v1-0-3bf68873dbed@gerhold.net> To: Rob Herring <robh+dt@kernel.org>, Krzysztof Kozlowski <krzysztof.kozlowski+dt@linaro.org>, Conor Dooley <conor+dt@kernel.org>, Frank Rowand <frowand.list@gmail.com> Cc: Andy Gross <agross@kernel.org>, Bjorn Andersson <andersson@kernel.org>, Konrad Dybcio <konrad.dybcio@linaro.org>, devicetree@vger.kernel.org, devicetree-spec@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-msm@vger.kernel.org, Stephan Gerhold <stephan@gerhold.net> X-Mailer: b4 0.12.2 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, SPF_HELO_PASS,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: <linux-kernel.vger.kernel.org> X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1765956002194445469?= X-GMAIL-MSGID: =?utf-8?q?1765956002194445469?= |
Series | of: reserved_mem: Provide more control about allocation behavior |
Commit Message
Stephan Gerhold
May 15, 2023, 10:12 a.m. UTC
Right now the allocation behavior for dynamic reserved memory is
implementation-defined. On Linux it is dependent on the architecture.
This is usually fine if the address is completely arbitrary.
However, when using "alloc-ranges" it is helpful to allow controlling
this. That way you can make sure that the reservations are placed next
to other (static) allocations to keep the free memory contiguous if
possible.
Signed-off-by: Stephan Gerhold <stephan@gerhold.net>
---
.../bindings/reserved-memory/reserved-memory.yaml | 39 ++++++++++++++++++++++
1 file changed, 39 insertions(+)
Comments
On Mon, May 15, 2023 at 12:12:16PM +0200, Stephan Gerhold wrote:
> Right now the allocation behavior for dynamic reserved memory is
> implementation-defined. On Linux it is dependent on the architecture.
> This is usually fine if the address is completely arbitrary.
>
> However, when using "alloc-ranges" it is helpful to allow controlling
> this. That way you can make sure that the reservations are placed next
> to other (static) allocations to keep the free memory contiguous if
> possible.

That should already be possible with all the information you
already have. IOW, you are looking at all the region and "alloc-ranges"
addresses to decide top-down or bottom-up. Why can't the kernel do that.

Alternatively, if you really care about the allocation locations, don't
use dynamic regions.

>
> Signed-off-by: Stephan Gerhold <stephan@gerhold.net>
> ---
>  .../bindings/reserved-memory/reserved-memory.yaml | 39 ++++++++++++++++++++++
>  1 file changed, 39 insertions(+)
>
> diff --git a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.yaml b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.yaml
> index c680e397cfd2..56f4bc6137e7 100644
> --- a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.yaml
> +++ b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.yaml
> @@ -52,6 +52,18 @@ properties:
>        Address and Length pairs. Specifies regions of memory that are
>        acceptable to allocate from.
>
> +  alloc-bottom-up:
> +    type: boolean
> +    description: >
> +      Specifies that the memory region should be preferably allocated
> +      at the lowest available address within the "alloc-ranges" region.
> +
> +  alloc-top-down:
> +    type: boolean
> +    description: >
> +      Specifies that the memory region should be preferably allocated
> +      at the highest available address within the "alloc-ranges" region.

What happens when both are set?

> +
>    iommu-addresses:
>      $ref: /schemas/types.yaml#/definitions/phandle-array
>      description: >
> @@ -93,6 +105,10 @@ properties:
>        system can use that region to store volatile or cached data that
>        can be otherwise regenerated or migrated elsewhere.
>
> +dependencies:
> +  alloc-bottom-up: [alloc-ranges]
> +  alloc-top-down: [alloc-ranges]
> +
>  allOf:
>    - if:
>        required:
> @@ -178,4 +194,27 @@ examples:
>          };
>        };
>      };
> +
> +  - |
> +    / {
> +        compatible = "foo";
> +        model = "foo";
> +
> +        #address-cells = <2>;
> +        #size-cells = <2>;
> +
> +        reserved-memory {
> +            #address-cells = <2>;
> +            #size-cells = <2>;
> +            ranges;
> +
> +            adsp_mem: adsp {
> +                size = <0x0 0x600000>;
> +                alignment = <0x0 0x100000>;
> +                alloc-ranges = <0x0 0x86800000 0x0 0x10000000>;
> +                alloc-bottom-up;
> +                no-map;
> +            };
> +        };
> +    };
> ...
>
> --
> 2.40.1
>
Hi Rob,

Thanks for your suggestions!

On Thu, Jun 08, 2023 at 08:02:56AM -0600, Rob Herring wrote:
> On Mon, May 15, 2023 at 12:12:16PM +0200, Stephan Gerhold wrote:
> > Right now the allocation behavior for dynamic reserved memory is
> > implementation-defined. On Linux it is dependent on the architecture.
> > This is usually fine if the address is completely arbitrary.
> >
> > However, when using "alloc-ranges" it is helpful to allow controlling
> > this. That way you can make sure that the reservations are placed next
> > to other (static) allocations to keep the free memory contiguous if
> > possible.
>
> That should already be possible with all the information you
> already have. IOW, you are looking at all the region and "alloc-ranges"
> addresses to decide top-down or bottom-up. Why can't the kernel do that.
>

Would you accept a patch implementing such a behavior?

There are obviously infinitely complicated algorithms possible for the
allocation. A fairly simple one would be to check if the "alloc-ranges"
overlap or are adjacent to an already existing reservation, i.e.

1. If the "alloc-range" starts at the end or inside an existing
   reservation, use bottom-up.
2. If the "alloc-range" ends at the start or inside an existing
   reservation, use top-down.
3. If both or none is the case, keep current (implementation-defined)
   behavior.

For reference, here are some examples how it behaves. |...| is the
unallocated memory, RRR existing allocations, and each RRR--- line
below a requested alloc-range (and where it was allocated):

Bottom-up (rule 1):
|.....RRRR................RRRRRRRRR...........|
          RRR----
       ---RRR-------

Top-down (rule 2):
|.....RRRR................RRRRRRRRR...........|
                   ----RRR
              ---------RRR------

Otherwise rule 3 just behaves as currently where either bottom-up
or top-down is used depending on the implementation/architecture:
|.....RRRR................RRRRRRRRR...........|
              -----RRR
           or RRR-----
        ---------------RRR----
     or --RRR-----------------

There are plenty of edge cases where it doesn't produce the optimal
result, but it just results in exactly the same behavior as currently
so it's not any worse (with rule 3):

|.....RRRR................RRRRRRRRR...........|
                        -----------RRR-----
                     or ----------------RRR
                      ---------------------RRR (no way to handle this
                   or RRR--------------------- with top-down/bottom-up)

> Alternatively, if you really care about the allocation locations, don't
> use dynamic regions.
>

Yes, this is the option used at the moment. As outlined in detail in the
examples of RFC PATCH 4/5 and 5/5 I would like a solution inbetween. The
exact address doesn't matter but the way (direction) the region is
filled should preferably stay the same.

> >
> > Signed-off-by: Stephan Gerhold <stephan@gerhold.net>
> > ---
> >  .../bindings/reserved-memory/reserved-memory.yaml | 39 ++++++++++++++++++++++
> >  1 file changed, 39 insertions(+)
> >
> > diff --git a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.yaml b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.yaml
> > index c680e397cfd2..56f4bc6137e7 100644
> > --- a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.yaml
> > +++ b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.yaml
> > @@ -52,6 +52,18 @@ properties:
> >        Address and Length pairs. Specifies regions of memory that are
> >        acceptable to allocate from.
> >
> > +  alloc-bottom-up:
> > +    type: boolean
> > +    description: >
> > +      Specifies that the memory region should be preferably allocated
> > +      at the lowest available address within the "alloc-ranges" region.
> > +
> > +  alloc-top-down:
> > +    type: boolean
> > +    description: >
> > +      Specifies that the memory region should be preferably allocated
> > +      at the highest available address within the "alloc-ranges" region.
>
> What happens when both are set?
>

They are not meant to be both set. I should have added an if statement
for this, sorry about that.

Thanks,
Stephan
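The overlap/adjacency heuristic proposed in the reply above can be illustrated with a small, self-contained C program. This is only a sketch, not the kernel implementation: the struct region type, the pick_direction() helper and the example addresses are invented for this illustration, and real code would test each "alloc-ranges" entry against the regions already reserved in memblock rather than against a fixed array.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical direction result; "default" corresponds to rule 3. */
enum alloc_dir { ALLOC_DEFAULT, ALLOC_BOTTOM_UP, ALLOC_TOP_DOWN };

struct region {
	uint64_t start;
	uint64_t end;	/* exclusive */
};

/*
 * Decide the allocation direction for one "alloc-ranges" entry based on
 * the existing reservations:
 *   rule 1: range starts at the end of or inside a reservation -> bottom-up
 *   rule 2: range ends at the start of or inside a reservation -> top-down
 *   rule 3: both or neither -> keep the implementation-defined default
 */
static enum alloc_dir pick_direction(struct region range,
				     const struct region *resv, int nresv)
{
	bool bottom_up = false, top_down = false;

	for (int i = 0; i < nresv; i++) {
		/* Range begins inside the reservation or right at its end. */
		if (range.start >= resv[i].start && range.start <= resv[i].end)
			bottom_up = true;
		/* Range ends inside the reservation or right at its start. */
		if (range.end >= resv[i].start && range.end <= resv[i].end)
			top_down = true;
	}

	if (bottom_up == top_down)
		return ALLOC_DEFAULT;	/* rule 3: both or none matched */
	return bottom_up ? ALLOC_BOTTOM_UP : ALLOC_TOP_DOWN;
}

int main(void)
{
	/* Two existing reservations, as in the |...RRRR...RRRRRRRRR...| diagrams. */
	const struct region resv[] = { { 0x05, 0x09 }, { 0x1a, 0x23 } };

	/* Starts at the end of the first reservation -> bottom-up (rule 1). */
	printf("%d\n", pick_direction((struct region){ 0x09, 0x10 }, resv, 2));
	/* Ends at the start of the second reservation -> top-down (rule 2). */
	printf("%d\n", pick_direction((struct region){ 0x13, 0x1a }, resv, 2));
	/* Touches neither reservation -> default (rule 3). */
	printf("%d\n", pick_direction((struct region){ 0x0c, 0x14 }, resv, 2));
	return 0;
}

With the two reservations above, the three calls print 1, 2 and 0, i.e. bottom-up, top-down and the unchanged default, matching the three rules in the reply.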
On Fri, Jun 09, 2023 at 11:16:01AM +0200, Stephan Gerhold wrote:
> Hi Rob,
>
> Thanks for your suggestions!
>
> On Thu, Jun 08, 2023 at 08:02:56AM -0600, Rob Herring wrote:
> > On Mon, May 15, 2023 at 12:12:16PM +0200, Stephan Gerhold wrote:
> > > Right now the allocation behavior for dynamic reserved memory is
> > > implementation-defined. On Linux it is dependent on the architecture.
> > > This is usually fine if the address is completely arbitrary.
> > >
> > > However, when using "alloc-ranges" it is helpful to allow controlling
> > > this. That way you can make sure that the reservations are placed next
> > > to other (static) allocations to keep the free memory contiguous if
> > > possible.
> >
> > That should already be possible with all the information you
> > already have. IOW, you are looking at all the region and "alloc-ranges"
> > addresses to decide top-down or bottom-up. Why can't the kernel do that.
> >
>
> Would you accept a patch implementing such a behavior?

Yes.

> There are obviously infinitely complicated algorithms possible for the
> allocation. A fairly simple one would be to check if the "alloc-ranges"
> overlap or are adjacent to an already existing reservation, i.e.
>
> 1. If the "alloc-range" starts at the end or inside an existing
>    reservation, use bottom-up.
> 2. If the "alloc-range" ends at the start or inside an existing
>    reservation, use top-down.
> 3. If both or none is the case, keep current (implementation-defined)
>    behavior.
>
> For reference, here are some examples how it behaves. |...| is the
> unallocated memory, RRR existing allocations, and each RRR--- line
> below a requested alloc-range (and where it was allocated):
>
> Bottom-up (rule 1):
> |.....RRRR................RRRRRRRRR...........|
>           RRR----
>        ---RRR-------
>
> Top-down (rule 2):
> |.....RRRR................RRRRRRRRR...........|
>                    ----RRR
>               ---------RRR------
>
> Otherwise rule 3 just behaves as currently where either bottom-up
> or top-down is used depending on the implementation/architecture:
> |.....RRRR................RRRRRRRRR...........|
>               -----RRR
>            or RRR-----
>         ---------------RRR----
>      or --RRR-----------------
>
> There are plenty of edge cases where it doesn't produce the optimal
> result, but it just results in exactly the same behavior as currently
> so it's not any worse (with rule 3):
>
> |.....RRRR................RRRRRRRRR...........|
>                         -----------RRR-----
>                      or ----------------RRR
>                       ---------------------RRR (no way to handle this
>                    or RRR--------------------- with top-down/bottom-up)
>
> > Alternatively, if you really care about the allocation locations, don't
> > use dynamic regions.
> >
>
> Yes, this is the option used at the moment. As outlined in detail in the
> examples of RFC PATCH 4/5 and 5/5 I would like a solution inbetween. The
> exact address doesn't matter but the way (direction) the region is
> filled should preferably stay the same.
>
> > >
> > > Signed-off-by: Stephan Gerhold <stephan@gerhold.net>
> > > ---
> > >  .../bindings/reserved-memory/reserved-memory.yaml | 39 ++++++++++++++++++++++
> > >  1 file changed, 39 insertions(+)
> > >
> > > diff --git a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.yaml b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.yaml
> > > index c680e397cfd2..56f4bc6137e7 100644
> > > --- a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.yaml
> > > +++ b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.yaml
> > > @@ -52,6 +52,18 @@ properties:
> > >        Address and Length pairs. Specifies regions of memory that are
> > >        acceptable to allocate from.
> > >
> > > +  alloc-bottom-up:
> > > +    type: boolean
> > > +    description: >
> > > +      Specifies that the memory region should be preferably allocated
> > > +      at the lowest available address within the "alloc-ranges" region.
> > > +
> > > +  alloc-top-down:
> > > +    type: boolean
> > > +    description: >
> > > +      Specifies that the memory region should be preferably allocated
> > > +      at the highest available address within the "alloc-ranges" region.
> >
> > What happens when both are set?
> >
> They are not meant to be both set. I should have added an if statement
> for this, sorry about that.

Ideally, you define the properties in a way to avoid that situation
rather than relying on schema checks. For example, a single property
with values defined for top-down and bottom-up.

Rob
diff --git a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.yaml b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.yaml
index c680e397cfd2..56f4bc6137e7 100644
--- a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.yaml
+++ b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.yaml
@@ -52,6 +52,18 @@ properties:
       Address and Length pairs. Specifies regions of memory that are
       acceptable to allocate from.
 
+  alloc-bottom-up:
+    type: boolean
+    description: >
+      Specifies that the memory region should be preferably allocated
+      at the lowest available address within the "alloc-ranges" region.
+
+  alloc-top-down:
+    type: boolean
+    description: >
+      Specifies that the memory region should be preferably allocated
+      at the highest available address within the "alloc-ranges" region.
+
   iommu-addresses:
     $ref: /schemas/types.yaml#/definitions/phandle-array
     description: >
@@ -93,6 +105,10 @@ properties:
       system can use that region to store volatile or cached data that
       can be otherwise regenerated or migrated elsewhere.
 
+dependencies:
+  alloc-bottom-up: [alloc-ranges]
+  alloc-top-down: [alloc-ranges]
+
 allOf:
   - if:
       required:
@@ -178,4 +194,27 @@ examples:
         };
       };
     };
+
+  - |
+    / {
+        compatible = "foo";
+        model = "foo";
+
+        #address-cells = <2>;
+        #size-cells = <2>;
+
+        reserved-memory {
+            #address-cells = <2>;
+            #size-cells = <2>;
+            ranges;
+
+            adsp_mem: adsp {
+                size = <0x0 0x600000>;
+                alignment = <0x0 0x100000>;
+                alloc-ranges = <0x0 0x86800000 0x0 0x10000000>;
+                alloc-bottom-up;
+                no-map;
+            };
+        };
+    };
 ...
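To make the proposed semantics concrete, here is a minimal, self-contained C sketch (not kernel code) of how a reservation with the size and alignment from the example node above might be placed inside a free "alloc-ranges" window, depending on whether alloc-bottom-up or alloc-top-down is set. The place_in_range() helper, its names and its treatment of the default case are assumptions made only for this illustration; the kernel would use its own allocator for this.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Align x down/up to a power-of-two boundary. */
static uint64_t align_down(uint64_t x, uint64_t a) { return x & ~(a - 1); }
static uint64_t align_up(uint64_t x, uint64_t a)   { return (x + a - 1) & ~(a - 1); }

/*
 * Place a reservation of "size" bytes, aligned to "align", inside the free
 * window [start, end). Returns the chosen base, or 0 if it does not fit.
 * bottom_up mirrors the proposed "alloc-bottom-up" property; when it is
 * false this sketch places top-down, i.e. the "alloc-top-down" behaviour.
 */
static uint64_t place_in_range(uint64_t start, uint64_t end,
			       uint64_t size, uint64_t align, bool bottom_up)
{
	uint64_t base;

	if (bottom_up)
		base = align_up(start, align);		/* lowest aligned address */
	else
		base = align_down(end - size, align);	/* highest aligned address */

	if (base < start || base + size > end)
		return 0;	/* window too small for this size/alignment */
	return base;
}

int main(void)
{
	/* Values from the example node: 6 MiB, 1 MiB aligned, 256 MiB window. */
	uint64_t start = 0x86800000, end = 0x96800000;
	uint64_t size = 0x600000, align = 0x100000;

	printf("alloc-bottom-up: 0x%llx\n",
	       (unsigned long long)place_in_range(start, end, size, align, true));
	printf("alloc-top-down:  0x%llx\n",
	       (unsigned long long)place_in_range(start, end, size, align, false));
	return 0;
}

For the example node this prints 0x86800000 with alloc-bottom-up and 0x96200000 with alloc-top-down, i.e. the reservation lands at the lowest or highest 1 MiB-aligned address of the 256 MiB window, which is the difference the two proposed properties are meant to express.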