From patchwork Fri Dec 9 15:11:51 2022
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 31821
Date: Fri, 9 Dec 2022 10:11:51 -0500
From: Steven Rostedt
To: LKML
Cc: Linux Trace Kernel, Masami Hiramatsu, Andrew Morton, Ross Zwisler
Subject: [PATCH] ring-buffer: Handle resize in early boot up
Message-ID: <20221209101151.1fec1167@gandalf.local.home>

From: Steven Rostedt

With the new command line option that allows trace event triggers to be
added at boot, the "snapshot" trigger will allocate the snapshot buffer
very early, when interrupts cannot yet be enabled. Allocating the ring
buffer is not the problem; the problem is that the allocation path also
resizes it, and the resize code does synchronization that cannot be
performed at early boot.

To handle this, first change the raw_spin_lock_irq() in rb_insert_pages()
to raw_spin_lock_irqsave(), so that unlocking that spin lock does not
enable interrupts. Next, where the resize code calls schedule_work_on(),
disable migration and check whether the CPU to update is the current CPU.
If it is, perform the work directly; otherwise, re-enable migration and
call schedule_work_on() for the CPU being updated. rb_insert_pages() only
needs to run on the CPU it is updating; it does not need preemption or
interrupts disabled when called.
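To make the direct-execution path concrete, here is a minimal sketch of
the pattern the resize hunks below apply. The helper name
run_update_on_cpu() is hypothetical and for illustration only;
migrate_disable(), migrate_enable(), smp_processor_id(),
update_pages_handler(), and schedule_work_on() are the functions the
patch actually uses.

/*
 * Illustrative sketch only -- not part of the patch. Shows the
 * "run on the target CPU directly, else queue work there" pattern.
 */
static void run_update_on_cpu(int cpu, struct work_struct *work)
{
	/* Pin this task to its current CPU so the CPU check stays valid. */
	migrate_disable();
	if (cpu == smp_processor_id()) {
		/* Already on the target CPU: do the update in place. */
		update_pages_handler(work);
		migrate_enable();
	} else {
		/* Leave the pinned section before queuing remote work. */
		migrate_enable();
		schedule_work_on(cpu, work);
	}
}

The migration guard is sufficient here because rb_insert_pages() only
requires running on the CPU it updates; it needs neither preemption nor
interrupts disabled, so the cheap migrate_disable() check avoids the
schedule_work_on() synchronization that cannot happen at early boot.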
Link: https://lore.kernel.org/lkml/Y5J%2FCajlNh1gexvo@google.com/
Fixes: a01fdc897fa5 ("tracing: Add trace_trigger kernel command line option")
Reported-by: Ross Zwisler
Signed-off-by: Steven Rostedt
Tested-by: Ross Zwisler
---
 kernel/trace/ring_buffer.c | 32 +++++++++++++++++++++++++-------
 1 file changed, 25 insertions(+), 7 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 843818ee4814..c366a0a9ddba 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -2062,8 +2062,10 @@ rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer)
 {
 	struct list_head *pages = &cpu_buffer->new_pages;
 	int retries, success;
+	unsigned long flags;
 
-	raw_spin_lock_irq(&cpu_buffer->reader_lock);
+	/* Can be called at early boot up, where interrupts must not be enabled */
+	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
 	/*
 	 * We are holding the reader lock, so the reader page won't be swapped
 	 * in the ring buffer. Now we are racing with the writer trying to
@@ -2120,7 +2122,7 @@ rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer)
 	 * tracing
 	 */
 	RB_WARN_ON(cpu_buffer, !success);
-	raw_spin_unlock_irq(&cpu_buffer->reader_lock);
+	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
 
 	/* free pages if they weren't inserted */
 	if (!success) {
@@ -2248,8 +2250,16 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
 			rb_update_pages(cpu_buffer);
 			cpu_buffer->nr_pages_to_update = 0;
 		} else {
-			schedule_work_on(cpu,
-					 &cpu_buffer->update_pages_work);
+			/* Run directly if possible. */
+			migrate_disable();
+			if (cpu != smp_processor_id()) {
+				migrate_enable();
+				schedule_work_on(cpu,
+						 &cpu_buffer->update_pages_work);
+			} else {
+				update_pages_handler(&cpu_buffer->update_pages_work);
+				migrate_enable();
+			}
 		}
 	}
 
@@ -2298,9 +2308,17 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
 		if (!cpu_online(cpu_id))
 			rb_update_pages(cpu_buffer);
 		else {
-			schedule_work_on(cpu_id,
-					 &cpu_buffer->update_pages_work);
-			wait_for_completion(&cpu_buffer->update_done);
+			/* Run directly if possible. */
+			migrate_disable();
+			if (cpu_id == smp_processor_id()) {
+				rb_update_pages(cpu_buffer);
+				migrate_enable();
+			} else {
+				migrate_enable();
+				schedule_work_on(cpu_id,
+						 &cpu_buffer->update_pages_work);
+				wait_for_completion(&cpu_buffer->update_done);
+			}
 		}
 
 		cpu_buffer->nr_pages_to_update = 0;
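
A closing note on the locking change (a sketch of the API semantics, not
part of the patch; "lock" below stands in for cpu_buffer->reader_lock):
raw_spin_unlock_irq() unconditionally re-enables interrupts, while the
irqsave/irqrestore pair restores whatever interrupt state the caller had,
which is what makes the early-boot call safe.

	unsigned long flags;

	raw_spin_lock_irq(&lock);	/* disables interrupts */
	raw_spin_unlock_irq(&lock);	/* re-enables them unconditionally:
					 * wrong at early boot, where they
					 * must stay off */

	raw_spin_lock_irqsave(&lock, flags);	  /* saves current IRQ state */
	raw_spin_unlock_irqrestore(&lock, flags); /* restores it: IRQs stay
						   * off if they were off */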