Message ID | 166994751368.439920.3236636557520824664.stgit@devnote3 |
---|---|
State | New |
Headers |
From: "Masami Hiramatsu (Google)" <mhiramat@kernel.org> To: Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com>, Mark Brown <broonie@kernel.org>, Kalesh Singh <kaleshsingh@google.com>, Masami Hiramatsu <mhiramat@kernel.org>, Marc Zyngier <maz@kernel.org>, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com> Subject: [PATCH v2 1/3] arm64: Prohibit instrumentation on arch_stack_walk() Date: Fri, 2 Dec 2022 11:18:33 +0900 Message-Id: <166994751368.439920.3236636557520824664.stgit@devnote3> In-Reply-To: <166994750386.439920.1754385804350980158.stgit@devnote3> References: <166994750386.439920.1754385804350980158.stgit@devnote3> |
Series | arm64: kprobes: Fix bugs in kprobes for arm64 |
Commit Message
Masami Hiramatsu (Google)
Dec. 2, 2022, 2:18 a.m. UTC
From: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Mark arch_stack_walk() as noinstr instead of notrace and inline functions
called from arch_stack_walk() as __always_inline so that user does not
put any instrumentations on it, because this function can be used from
return_address() which is used by lockdep.

Without this, if the kernel built with CONFIG_LOCKDEP=y, just probing
arch_stack_walk() via <tracefs>/kprobe_events will crash the kernel on
arm64.

 # echo p arch_stack_walk >> ${TRACEFS}/kprobe_events
 # echo 1 > ${TRACEFS}/events/kprobes/enable

  kprobes: Failed to recover from reentered kprobes.
  kprobes: Dump kprobe:
  .symbol_name = arch_stack_walk, .offset = 0, .addr = arch_stack_walk+0x0/0x1c0
  ------------[ cut here ]------------
  kernel BUG at arch/arm64/kernel/probes/kprobes.c:241!
  kprobes: Failed to recover from reentered kprobes.
  kprobes: Dump kprobe:
  .symbol_name = arch_stack_walk, .offset = 0, .addr = arch_stack_walk+0x0/0x1c0
  ------------[ cut here ]------------
  kernel BUG at arch/arm64/kernel/probes/kprobes.c:241!
  PREEMPT SMP
  Modules linked in:
  CPU: 0 PID: 17 Comm: migration/0 Tainted: G            N 6.1.0-rc5+ #6
  Hardware name: linux,dummy-virt (DT)
  Stopper: 0x0 <- 0x0
  pstate: 600003c5 (nZCv DAIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
  pc : kprobe_breakpoint_handler+0x178/0x17c
  lr : kprobe_breakpoint_handler+0x178/0x17c
  sp : ffff8000080d3090
  x29: ffff8000080d3090 x28: ffff0df5845798c0 x27: ffffc4f59057a774
  x26: ffff0df5ffbba770 x25: ffff0df58f420f18 x24: ffff49006f641000
  x23: ffffc4f590579768 x22: ffff0df58f420f18 x21: ffff8000080d31c0
  x20: ffffc4f590579768 x19: ffffc4f590579770 x18: 0000000000000006
  x17: 5f6b636174735f68 x16: 637261203d207264 x15: 64612e202c30203d
  x14: 2074657366666f2e x13: 30633178302f3078 x12: 302b6b6c61775f6b
  x11: 636174735f686372 x10: ffffc4f590dc5bd8 x9 : ffffc4f58eb31958
  x8 : 00000000ffffefff x7 : ffffc4f590dc5bd8 x6 : 80000000fffff000
  x5 : 000000000000bff4 x4 : 0000000000000000 x3 : 0000000000000000
  x2 : 0000000000000000 x1 : ffff0df5845798c0 x0 : 0000000000000064
  Call trace:
  kprobes: Failed to recover from reentered kprobes.
  kprobes: Dump kprobe:
  .symbol_name = arch_stack_walk, .offset = 0, .addr = arch_stack_walk+0x0/0x1c0
  ------------[ cut here ]------------
  kernel BUG at arch/arm64/kernel/probes/kprobes.c:241!

Fixes: 39ef362d2d45 ("arm64: Make return_address() use arch_stack_walk()")
Cc: stable@vger.kernel.org
Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
---
Changes in v2:
 - Use noinstr instead of NOKPROBE_SYMBOL()
 - Use __always_inline because nokprobe_inline will be changed if
   CONFIG_KPROBES=n.
 - Fix indentation.
---
 arch/arm64/kernel/stacktrace.c |   10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
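The failure mode described above — the kprobe breakpoint handler itself running through probed code (lockdep -> return_address() -> arch_stack_walk()) — can be illustrated with a small userspace analogue. This is purely illustrative; none of the function names below are kernel APIs:

```shell
# Toy model of kprobe re-entry: the "breakpoint handler" calls a helper
# that is itself "probed", so the handler is entered a second time.
# The kernel cannot recover from this and BUGs; here we just detect it.
in_handler=0

breakpoint_handler() {
    if [ "$in_handler" -eq 1 ]; then
        # Mirrors "kprobes: Failed to recover from reentered kprobes."
        echo "reentered handler: unrecoverable"
        return
    fi
    in_handler=1
    probed_stack_walk     # like lockdep -> return_address() -> arch_stack_walk()
    in_handler=0
}

# Entering the "probed" function traps into the handler again.
probed_stack_walk() {
    breakpoint_handler
}

probed_stack_walk
```

Making arch_stack_walk() noinstr breaks this cycle at the source: the handler's own stack walk can never hit a probe.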
Comments
On Fri, Dec 02, 2022 at 11:18:33AM +0900, Masami Hiramatsu (Google) wrote:
> From: Masami Hiramatsu (Google) <mhiramat@kernel.org>
>
> Mark arch_stack_walk() as noinstr instead of notrace and inline functions
> called from arch_stack_walk() as __always_inline so that user does not
> put any instrumentations on it, because this function can be used from
> return_address() which is used by lockdep.

Hmmm... since arch_stack_walk is marked as notrace, that will be prohibited by
default unless the kernel was built with CONFIG_KPROBE_EVENTS_ON_NOTRACE=y,
and the commit message for that says:

  │ This option disables such protection and allows you to put kprobe │
  │ events on ftrace functions for debugging ftrace by itself.        │
  │ Note that this might let you shoot yourself in the foot.          │

... and IIUC we generally don't expect people to set that, and that might be
worth calling out in the commit message.

> Without this, if the kernel built with CONFIG_LOCKDEP=y, just probing
> arch_stack_walk() via <tracefs>/kprobe_events will crash the kernel on
> arm64.
>
> # echo p arch_stack_walk >> ${TRACEFS}/kprobe_events
> # echo 1 > ${TRACEFS}/events/kprobes/enable

I had a go at testing this patch, and it fixes the crash with the reproducer
above, but there are plenty of other instances in stacktrace.c that lead to the
same sort of crash, e.g.

  # echo p stackinfo_get_task >> ${TRACEFS}/kprobe_events
  # echo 1 > ${TRACEFS}/events/kprobes/enable

... so I think there's more that we need to do to fix this generally.

Note: I found interesting functions to try tracing by looking at the output of:

  aarch64-linux-objdump -t arch/arm64/kernel/stacktrace.o | grep -w '.text'

That all said, I think this patch is nice-to-have, and that we can address the
other cases as a follow-up, so for this patch as-is (with or without some
wording regarding CONFIG_KPROBE_EVENTS_ON_NOTRACE):

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.
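Mark's symbol-hunting command can be sketched against fabricated `objdump -t` output. The symbol names, addresses, and sizes below are made up for illustration; run the real command against your own build:

```shell
# Fabricated stand-in for `aarch64-linux-objdump -t arch/arm64/kernel/stacktrace.o`.
sample='0000000000000000 l    d  .text 0000000000000000 .text
0000000000000040 l     F .text 0000000000000100 stackinfo_get_task
0000000000000140 g     F .text 00000000000001c0 arch_stack_walk
0000000000000000 g     F .init.text 0000000000000020 some_init_code'

# Keep only function (F) symbols placed in .text: these are the out-of-line
# functions that survived inlining and are therefore potential kprobe targets.
printf '%s\n' "$sample" | awk '$3 == "F" && $4 == ".text" { print $NF }'
```

This is why stackinfo_get_task() was probe-able at all: despite looking like an inline helper in the source, the compiler emitted an out-of-line copy into .text.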
On Fri, 2 Dec 2022 13:17:30 +0000
Mark Rutland <mark.rutland@arm.com> wrote:

> On Fri, Dec 02, 2022 at 11:18:33AM +0900, Masami Hiramatsu (Google) wrote:
> > From: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> >
> > Mark arch_stack_walk() as noinstr instead of notrace and inline functions
> > called from arch_stack_walk() as __always_inline so that user does not
> > put any instrumentations on it, because this function can be used from
> > return_address() which is used by lockdep.
>
> Hmmm... since arch_stack_walk is marked as notrace, that will be prohibited by
> default unless the kernel was built with CONFIG_KPROBE_EVENTS_ON_NOTRACE=y,
> and the commit message for that says:
>
>   │ This option disables such protection and allows you to put kprobe │
>   │ events on ftrace functions for debugging ftrace by itself.        │
>   │ Note that this might let you shoot yourself in the foot.          │
>
> ... and IIUC we generally don't expect people to set that, and that might be
> worth calling out in the commit message.

Yeah, but that is the feature for kprobe events, and not applied to the
kprobes itself. Maybe I should use a test kprobe module to put a probe on it.

> > Without this, if the kernel built with CONFIG_LOCKDEP=y, just probing
> > arch_stack_walk() via <tracefs>/kprobe_events will crash the kernel on
> > arm64.
> >
> > # echo p arch_stack_walk >> ${TRACEFS}/kprobe_events
> > # echo 1 > ${TRACEFS}/events/kprobes/enable
>
> I had a go at testing this patch, and it fixes the crash with the reproducer
> above, but there are plenty of other instances in stacktrace.c that lead to the
> same sort of crash, e.g.
>
>   # echo p stackinfo_get_task >> ${TRACEFS}/kprobe_events
>   # echo 1 > ${TRACEFS}/events/kprobes/enable

Oops, thanks for pointing! Hmm, I thought stackinfo_get_task() is an
inlined function usually. Maybe we should make it nokprobe_inline.
But this is just one case. I need to scan all symbols to trace...

> ... so I think there's more that we need to do to fix this generally.
>
> Note: I found interesting functions to try tracing by looking at the output of:
>
>   aarch64-linux-objdump -t arch/arm64/kernel/stacktrace.o | grep -w '.text'
>
> That all said, I think this patch is nice-to-have, and that we can address the
> other cases as a follow-up, so for this patch as-is (with or without some
> wording regarding CONFIG_KPROBE_EVENTS_ON_NOTRACE):
>
> Acked-by: Mark Rutland <mark.rutland@arm.com>

Thanks!

> Mark.
On Tue, Dec 06, 2022 at 09:41:37AM +0900, Masami Hiramatsu wrote:
> On Fri, 2 Dec 2022 13:17:30 +0000
> Mark Rutland <mark.rutland@arm.com> wrote:
> > I had a go at testing this patch, and it fixes the crash with the reproducer
> > above, but there are plenty of other instances in stacktrace.c that lead to the
> > same sort of crash, e.g.
> >
> >   # echo p stackinfo_get_task >> ${TRACEFS}/kprobe_events
> >   # echo 1 > ${TRACEFS}/events/kprobes/enable
>
> Oops, thanks for pointing! Hmm, I thought stackinfo_get_task() is an
> inlined function usually. Maybe we should make it nokprobe_inline.
> But this is just one case. I need to scan all symbols to trace...

FWIW, due to other instrumentation issues I've started doing a larger noinstr
cleanup on arm64 which should address this, as fixing this properly involves
also modifying a bunch of underlying helpers (e.g. preempt_count()).

I can Cc you when sending that out, if you'd like? That'll probably be in the
new year.

Thanks,
Mark.
On Fri, 9 Dec 2022 11:16:34 +0000
Mark Rutland <mark.rutland@arm.com> wrote:

> On Tue, Dec 06, 2022 at 09:41:37AM +0900, Masami Hiramatsu wrote:
> > On Fri, 2 Dec 2022 13:17:30 +0000
> > Mark Rutland <mark.rutland@arm.com> wrote:
> > > I had a go at testing this patch, and it fixes the crash with the reproducer
> > > above, but there are plenty of other instances in stacktrace.c that lead to the
> > > same sort of crash, e.g.
> > >
> > >   # echo p stackinfo_get_task >> ${TRACEFS}/kprobe_events
> > >   # echo 1 > ${TRACEFS}/events/kprobes/enable
> >
> > Oops, thanks for pointing! Hmm, I thought stackinfo_get_task() is an
> > inlined function usually. Maybe we should make it nokprobe_inline.
> > But this is just one case. I need to scan all symbols to trace...
>
> FWIW, due to other instrumentation issues I've started doing a larger noinstr
> cleanup on arm64 which should address this, as fixing this properly involves
> also modifying a bunch of underlying helpers (e.g. preempt_count()).
>
> I can Cc you when sending that out, if you'd like? That'll probably be in the
> new year.

Thanks Mark, yes, I'm interested in that work. Until that, I'll finish
making a script for testing kprobes widely on kernel symbols.

Thank you,

> Thanks,
> Mark.
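A first cut of the symbol-sweep script Masami mentions might look like this dry-run sketch. It only prints the commands rather than executing them; the tracefs mount point is an assumption, and actually running the output needs root and a disposable machine, since a bad probe can crash the kernel:

```shell
TRACEFS=${TRACEFS:-/sys/kernel/tracing}   # assumed mount point

# Print (do not execute) one full add/arm/disarm/remove cycle per symbol,
# so that a crash during a real run pinpoints the offending symbol.
sweep_commands() {
    for sym in "$@"; do
        echo "echo 'p:test_$sym $sym' >> $TRACEFS/kprobe_events"
        echo "echo 1 > $TRACEFS/events/kprobes/enable"
        echo "echo 0 > $TRACEFS/events/kprobes/enable"
        echo "echo '-:test_$sym' >> $TRACEFS/kprobe_events"
    done
}

sweep_commands arch_stack_walk stackinfo_get_task
```

The symbol list would in practice come from /proc/kallsyms or the objdump technique above, filtered against the kprobe blacklist.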
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 634279b3b03d..117e2c180f3c 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -23,8 +23,8 @@
  *
  * The regs must be on a stack currently owned by the calling task.
  */
-static inline void unwind_init_from_regs(struct unwind_state *state,
-					 struct pt_regs *regs)
+static __always_inline void unwind_init_from_regs(struct unwind_state *state,
+						  struct pt_regs *regs)
 {
 	unwind_init_common(state, current);
 
@@ -58,8 +58,8 @@ static __always_inline void unwind_init_from_caller(struct unwind_state *state)
  * duration of the unwind, or the unwind will be bogus. It is never valid to
  * call this for the current task.
  */
-static inline void unwind_init_from_task(struct unwind_state *state,
-					 struct task_struct *task)
+static __always_inline void unwind_init_from_task(struct unwind_state *state,
+						  struct task_struct *task)
 {
 	unwind_init_common(state, task);
 
@@ -186,7 +186,7 @@ void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
 		: stackinfo_get_unknown();		\
 	})
 
-noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
+noinline noinstr void arch_stack_walk(stack_trace_consume_fn consume_entry,
 			      void *cookie, struct task_struct *task,
 			      struct pt_regs *regs)
 {