Message ID: <cover.1698077459.git.andreyknvl@google.com>
Series: stackdepot: allow evicting stack traces
Message
andrey.konovalov@linux.dev
Oct. 23, 2023, 4:22 p.m. UTC
From: Andrey Konovalov <andreyknvl@google.com>
Currently, the stack depot grows indefinitely until it reaches its
capacity. Once that happens, the stack depot stops saving new stack
traces.
This creates a problem for using the stack depot for in-field testing
and in production.
For such uses, an ideal stack trace storage should:
1. Allow saving fresh stack traces on systems with a large uptime while
limiting the amount of memory used to store the traces;
2. Have a low performance impact.
Implementing #1 in the stack depot is impossible with the current
keep-forever approach. This series aims to address that. Issue #2 is
left to be addressed in a future series.
This series changes the stack depot implementation to allow evicting
unneeded stack traces from the stack depot. The users of the stack depot
can do that via new stack_depot_save_flags(STACK_DEPOT_FLAG_GET) and
stack_depot_put APIs.
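To illustrate the contract of the new API pair, here is a small userspace model. Everything below (the `toy_` names, the array-backed storage, the 16-frame cap) is invented for illustration only; what it mirrors from the series is just the semantics: a save with the GET flag takes a reference on the deduplicated record, and a matching put drops it, evicting the record (freeing its slot) when the last reference goes away.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy stand-ins for the API named in the cover letter:
 * stack_depot_save_flags(STACK_DEPOT_FLAG_GET) and stack_depot_put.
 * All internals here are invented; this is not lib/stackdepot.c code. */
typedef unsigned int depot_stack_handle_t;
#define STACK_DEPOT_FLAG_GET 0x1

#define TOY_MAX_FRAMES 16
#define TOY_CAPACITY 8

struct toy_record {
	int refcount;                 /* >0 while some user holds the trace */
	int in_use;
	unsigned int nr_entries;
	unsigned long entries[TOY_MAX_FRAMES];
};

static struct toy_record depot[TOY_CAPACITY];

/* Deduplicating save: a GET-flagged save takes a reference on the record.
 * Returns a 1-based handle; 0 means the depot is full. */
static depot_stack_handle_t
toy_stack_depot_save_flags(const unsigned long *entries, unsigned int nr,
			   unsigned int flags)
{
	if (nr > TOY_MAX_FRAMES)
		nr = TOY_MAX_FRAMES;  /* truncate, mirroring fixed-size slots */

	for (unsigned int i = 0; i < TOY_CAPACITY; i++) {
		struct toy_record *r = &depot[i];
		if (r->in_use && r->nr_entries == nr &&
		    !memcmp(r->entries, entries, nr * sizeof(*entries))) {
			if (flags & STACK_DEPOT_FLAG_GET)
				r->refcount++;
			return i + 1;
		}
	}
	for (unsigned int i = 0; i < TOY_CAPACITY; i++) {
		struct toy_record *r = &depot[i];
		if (!r->in_use) {
			r->in_use = 1;
			r->refcount = (flags & STACK_DEPOT_FLAG_GET) ? 1 : 0;
			r->nr_entries = nr;
			memcpy(r->entries, entries, nr * sizeof(*entries));
			return i + 1;
		}
	}
	return 0;
}

/* Dropping the last reference evicts the record, freeing its slot. */
static void toy_stack_depot_put(depot_stack_handle_t handle)
{
	struct toy_record *r;

	if (!handle)
		return;
	r = &depot[handle - 1];
	if (r->refcount > 0 && --r->refcount == 0)
		r->in_use = 0;
}
```

In the real series the users of these calls are the tag-based KASAN modes: they save with STACK_DEPOT_FLAG_GET when recording an alloc/free stack, and put the handle when the stack ring entry is overwritten.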
Internal changes to the stack depot code include:
1. Storing stack traces in fixed-frame-sized slots; the slot size is
controlled via CONFIG_STACKDEPOT_MAX_FRAMES (vs precisely-sized
slots in the current implementation);
2. Keeping available slots in a freelist (vs keeping an offset to the next
free slot);
3. Using a read/write lock for synchronization (vs a lock-free approach
combined with a spinlock).
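Internal changes #1 and #2 — fixed-size slots kept on a freelist — can be sketched in userspace as follows. The structure layout and names are hypothetical stand-ins, not the actual lib/stackdepot.c code; the point is only that eviction pushes a slot back onto the freelist so a later save can reuse it, which a bump-allocator offset cannot do.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical constants standing in for CONFIG_STACKDEPOT_MAX_FRAMES
 * and the pool size; values chosen only for the demo. */
#define MAX_FRAMES 32
#define NR_SLOTS 4

struct record {
	struct record *next;                 /* freelist link */
	unsigned int nr_entries;
	unsigned long entries[MAX_FRAMES];   /* fixed-size slot */
};

static struct record pool[NR_SLOTS];
static struct record *freelist;

/* Thread all slots onto the freelist. */
static void init_pool(void)
{
	for (int i = 0; i < NR_SLOTS; i++) {
		pool[i].next = freelist;
		freelist = &pool[i];
	}
}

/* Pop a free slot; returns NULL when the depot is at capacity. */
static struct record *alloc_record(void)
{
	struct record *r = freelist;

	if (r)
		freelist = r->next;
	return r;
}

/* Evicting a record pushes its slot back for reuse. */
static void free_record(struct record *r)
{
	r->next = freelist;
	freelist = r;
}
```

Because every slot has the same size, any freed slot can hold any future stack trace — that uniformity is what makes the freelist (and hence eviction) simple, at the cost of the rounding-up waste discussed below.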
This series also integrates the eviction functionality in the tag-based
KASAN modes.
Despite wasting some space on rounding up the size of each stack record,
with CONFIG_STACKDEPOT_MAX_FRAMES=32, the tag-based KASAN modes end up
consuming ~5% less memory in stack depot during boot (with the default
stack ring size of 32k entries). The reason for this is the eviction of
irrelevant stack traces from the stack depot, which frees up space for
other stack traces.
For other tools that heavily rely on the stack depot, like Generic KASAN
and KMSAN, this change leads to the stack depot capacity being reached
sooner than before. However, as these tools are mainly used in fuzzing
scenarios where the kernel is frequently rebooted, this outcome should
be acceptable.
There is no measurable boot time performance impact of these changes for
KASAN on x86-64. I haven't done any tests for the arm64 tag-based modes
(without performance optimizations, the stack depot is not yet suitable
for their intended use anyway), but I expect a similar result: obtaining
and copying stack trace frames when saving them into the stack depot is
what takes the most time.
This series does not yet provide a way to configure the maximum size of
the stack depot externally (e.g. via a command-line parameter). This will
be added in a separate series, possibly together with the performance
improvement changes.
---
Changes v2->v3:
- Fix null-ptr-deref by using the proper number of entries for
initializing the stack table when alloc_large_system_hash()
auto-calculates the number (see patch #12).
- Keep STACKDEPOT/STACKDEPOT_ALWAYS_INIT Kconfig options not configurable
by users.
- Use lockdep_assert_held_read annotation in depot_fetch_stack.
- WARN_ON invalid flags in stack_depot_save_flags.
- Moved "../slab.h" include in mm/kasan/report_tags.c in the right patch.
- Various comment fixes.
Changes v1->v2:
- Rework API to stack_depot_save_flags(STACK_DEPOT_FLAG_GET) +
stack_depot_put.
- Add CONFIG_STACKDEPOT_MAX_FRAMES Kconfig option.
- Switch stack depot to using list_head's.
- Assorted minor changes, see the commit message for each patch.
Andrey Konovalov (19):
lib/stackdepot: check disabled flag when fetching
lib/stackdepot: simplify __stack_depot_save
lib/stackdepot: drop valid bit from handles
lib/stackdepot: add depot_fetch_stack helper
lib/stackdepot: use fixed-sized slots for stack records
lib/stackdepot: fix and clean-up atomic annotations
lib/stackdepot: rework helpers for depot_alloc_stack
lib/stackdepot: rename next_pool_required to new_pool_required
lib/stackdepot: store next pool pointer in new_pool
lib/stackdepot: store free stack records in a freelist
lib/stackdepot: use read/write lock
lib/stackdepot: use list_head for stack record links
kmsan: use stack_depot_save instead of __stack_depot_save
lib/stackdepot, kasan: add flags to __stack_depot_save and rename
lib/stackdepot: add refcount for records
lib/stackdepot: allow users to evict stack traces
kasan: remove atomic accesses to stack ring entries
kasan: check object_size in kasan_complete_mode_report_info
kasan: use stack_depot_put for tag-based modes
include/linux/stackdepot.h | 59 ++++--
lib/Kconfig | 10 +
lib/stackdepot.c | 418 ++++++++++++++++++++++++-------------
mm/kasan/common.c | 7 +-
mm/kasan/generic.c | 9 +-
mm/kasan/kasan.h | 2 +-
mm/kasan/report_tags.c | 27 +--
mm/kasan/tags.c | 24 ++-
mm/kmsan/core.c | 7 +-
9 files changed, 365 insertions(+), 198 deletions(-)
Comments
On 2023-10-23 18:22, andrey.konovalov@linux.dev wrote:
> [...]

Tested-by: Anders Roxell <anders.roxell@linaro.org>

Applied this patchset to linux-next tag next-20231023 and built an
arm64 kernel and that booted fine in QEMU.

Cheers,
Anders
On Mon, 23 Oct 2023 at 18:22, <andrey.konovalov@linux.dev> wrote:
[...]
> lib/stackdepot: use fixed-sized slots for stack records

1. I know fixed-sized slots are needed for eviction to work, but have
you evaluated if this causes some excessive memory waste now? Or is it
negligible?

If it turns out to be a problem, one way out would be to partition the
freelist into stack size classes; e.g. one for each of stack traces of
size 8, 16, 32, 64.

> lib/stackdepot: use read/write lock

2. I still think switching to the percpu_rwsem right away is the right
thing, and not actually a downside. I mentioned this before, but you
promised a follow-up patch, so I trust that this will happen. ;-)

Acked-by: Marco Elver <elver@google.com>

The series looks good in its current state. However, see my 2
higher-level comments above.
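For illustration, the size-class partitioning suggested in the review above could map trace lengths to per-class freelists roughly as in this sketch. The classes (8, 16, 32, 64 frames) come from the suggestion; the function name and rounding policy are invented, not part of the series.

```c
#include <assert.h>

/* Hypothetical sketch: one freelist per stack-size class, so a short
 * trace does not occupy a 64-frame slot. */
static const unsigned int class_frames[] = { 8, 16, 32, 64 };
#define NR_CLASSES 4

/* Map a trace length to the smallest class that fits it; oversized
 * traces would be truncated into the largest class. */
static int size_class(unsigned int nr_entries)
{
	for (int c = 0; c < NR_CLASSES; c++)
		if (nr_entries <= class_frames[c])
			return c;
	return NR_CLASSES - 1;
}
```

Each class would then keep its own freelist of fixed-size slots, trading a little extra bookkeeping for less rounding-up waste per record.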
On Tue, Oct 24, 2023 at 3:14 PM Marco Elver <elver@google.com> wrote:
>
> 1. I know fixed-sized slots are need for eviction to work, but have
> you evaluated if this causes some excessive memory waste now? Or is it
> negligible?

With the current default stack depot slot size of 64 frames, a single
stack trace takes up ~3-4x on average compared to precisely sized slots
(KMSAN is closer to ~4x due to its 3-frame-sized linking records).

However, as the tag-based KASAN modes evict old stack traces, the
average total amount of memory used for stack traces is ~0.5 MB (with
the default stack ring size of 32k entries).

I also have just mailed an eviction implementation for Generic KASAN.
With it, the stack traces take up ~1 MB per 1 GB of RAM while running
syzkaller (stack traces are evicted when they are flushed from
quarantine, and quarantine's size depends on the amount of RAM.)

The only problem is KMSAN. Based on a discussion with Alexander, it
might not be possible to implement the eviction for it. So I suspect,
with this change, syzbot might run into the capacity WARNING from time
to time.

The simplest solution would be to bump the maximum size of stack depot
storage to x4 if KMSAN is enabled (to 512 MB from the current 128 MB).
KMSAN requires a significant amount of RAM for shadow anyway.

Would that be acceptable?

> If it turns out to be a problem, one way out would be to partition the
> freelist into stack size classes; e.g. one for each of stack traces of
> size 8, 16, 32, 64.

This shouldn't be hard to implement.

However, as one of the perf improvements, I'm thinking of saving a
stack trace directly into a stack depot slot (to avoid copying it).
With this, we won't know the stack trace size before it is saved. So
this won't work together with the size classes.

> 2. I still think switching to the percpu_rwsem right away is the right
> thing, and not actually a downside. I mentioned this before, but you
> promised a follow-up patch, so I trust that this will happen. ;-)

First thing on my TODO list wrt perf improvements :)

> Acked-by: Marco Elver <elver@google.com>
>
> The series looks good in its current state. However, see my 2
> higher-level comments above.

Thank you!
On Fri, Nov 3, 2023 at 10:37 PM Andrey Konovalov <andreyknvl@gmail.com> wrote:
> [...]
>
> However, as one of the perf improvements, I'm thinking of saving a
> stack trace directly into a stack depot slot (to avoid copying it).
> With this, we won't know the stack trace size before it is saved. So
> this won't work together with the size classes.

On a second thought, saving stack traces directly into a stack depot
slot will require taking the write lock, which will badly affect
performance, or using some other elaborate locking scheme, which might
be an overkill.