Message ID | 20230612160748.4082850-1-svens@linux.ibm.com |
---|---|
State | New |
Headers |
From: Sven Schnelle <svens@linux.ibm.com>
To: Steven Rostedt <rostedt@goodmis.org>
Cc: linux-kernel@vger.kernel.org
Subject: [PATCH] tracing: fix memcpy size when copying stack entries
Date: Mon, 12 Jun 2023 18:07:48 +0200
Message-Id: <20230612160748.4082850-1-svens@linux.ibm.com>
Series | tracing: fix memcpy size when copying stack entries |
Commit Message
Sven Schnelle
June 12, 2023, 4:07 p.m. UTC
Noticed the following warning during boot:
[ 2.316341] Testing tracer wakeup:
[ 2.383512] ------------[ cut here ]------------
[ 2.383517] memcpy: detected field-spanning write (size 104) of single field "&entry->caller" at kernel/trace/trace.c:3167 (size 64)
The reason seems to be that the maximum number of entries is calculated
from the size of the fstack->calls array, which holds 128 entries. But later
the same size is used to memcpy() the entries to entry->caller, which only
has room for eight elements. Therefore use the minimum of both array sizes
as the limit.
Signed-off-by: Sven Schnelle <svens@linux.ibm.com>
---
kernel/trace/trace.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Comments
On Mon, 12 Jun 2023 18:07:48 +0200
Sven Schnelle <svens@linux.ibm.com> wrote:

> Noticed the following warning during boot:
>
> [    2.316341] Testing tracer wakeup:
> [    2.383512] ------------[ cut here ]------------
> [    2.383517] memcpy: detected field-spanning write (size 104) of single field "&entry->caller" at kernel/trace/trace.c:3167 (size 64)
>
> The reason seems to be that the maximum number of entries is calculated
> from the size of the fstack->calls array which is 128. But later the same
> size is used to memcpy() the entries to entry->caller, which has only
> room for eight elements. Therefore use the minimum of both arrays as limit.
>
> Signed-off-by: Sven Schnelle <svens@linux.ibm.com>
> ---
>  kernel/trace/trace.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> index 64a4dde073ef..988d664c13ec 100644
> --- a/kernel/trace/trace.c
> +++ b/kernel/trace/trace.c
> @@ -3146,7 +3146,7 @@ static void __ftrace_trace_stack(struct trace_buffer *buffer,
>  	barrier();
>
>  	fstack = this_cpu_ptr(ftrace_stacks.stacks) + stackidx;
> -	size = ARRAY_SIZE(fstack->calls);
> +	size = min(ARRAY_SIZE(entry->caller), ARRAY_SIZE(fstack->calls));

No, this is not how it works, and this breaks the stack tracing code.

>
>  	if (regs) {
>  		nr_entries = stack_trace_save_regs(regs, fstack->calls,

I guess we need to add some type of annotation to make the memcpy()
checking happy. Let me explain what is happening.

By default the stack trace has a minimum of 8 entries (defined by
struct stack_entry, which is used to show to user space the default
size - for backward compatibility).

Let's take a look at the code in more detail:

	/* What is the size of the temp buffer to use to find the stack? */
	size = ARRAY_SIZE(fstack->calls);

	if (regs) {
		/* Fills in the stack into the temp buffer */
		nr_entries = stack_trace_save_regs(regs, fstack->calls,
						   size, skip);
	} else {
		/* Also fills in the stack into the temp buffer */
		nr_entries = stack_trace_save(fstack->calls, size, skip);
	}

	/* Calculate the size from the number of entries stored in the temp buffer */
	size = nr_entries * sizeof(unsigned long);

	/* Now reserve space on the ring buffer */
	event = __trace_buffer_lock_reserve(buffer, TRACE_STACK,
			    /*
			     * Notice how it calculates the size! It subtracts
			     * the sizeof entry->caller and then adds size again!
			     */
			    (sizeof(*entry) - sizeof(entry->caller)) + size,
			    trace_ctx);
	if (!event)
		goto out;

	/* Point entry to the ring buffer data */
	entry = ring_buffer_event_data(event);

	/* Now copy the stack to the location for the data on the ftrace ring buffer */
	memcpy(&entry->caller, fstack->calls, size);

	entry->size = nr_entries;

The old way used to just record the 8 entries, but that was not very
useful in real world analysis. Your patch takes that away. Might as well
just record directly into the ring buffer again like it used to.

Yes the above may be special, but your patch breaks it.

NAK on the patch, but I'm willing to update this to make your tooling
handle this special case.

-- Steve
Steven Rostedt <rostedt@goodmis.org> writes:

> On Mon, 12 Jun 2023 18:07:48 +0200
> Sven Schnelle <svens@linux.ibm.com> wrote:
>
>> Noticed the following warning during boot:
>>
>> [    2.316341] Testing tracer wakeup:
>> [    2.383512] ------------[ cut here ]------------
>> [    2.383517] memcpy: detected field-spanning write (size 104) of single field "&entry->caller" at kernel/trace/trace.c:3167 (size 64)
>>
>> The reason seems to be that the maximum number of entries is calculated
>> from the size of the fstack->calls array which is 128. But later the same
>> size is used to memcpy() the entries to entry->caller, which has only
>> room for eight elements. Therefore use the minimum of both arrays as limit.
>>
>> Signed-off-by: Sven Schnelle <svens@linux.ibm.com>
>> ---
>>  kernel/trace/trace.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
>> index 64a4dde073ef..988d664c13ec 100644
>> --- a/kernel/trace/trace.c
>> +++ b/kernel/trace/trace.c
>> @@ -3146,7 +3146,7 @@ static void __ftrace_trace_stack(struct trace_buffer *buffer,
>>  	barrier();
>>
>>  	fstack = this_cpu_ptr(ftrace_stacks.stacks) + stackidx;
>> -	size = ARRAY_SIZE(fstack->calls);
>> +	size = min(ARRAY_SIZE(entry->caller), ARRAY_SIZE(fstack->calls));
>
> No, this is not how it works, and this breaks the stack tracing code.
> [..]
> The old way used to just record the 8 entries, but that was not very
> useful in real world analysis. Your patch takes that away. Might as well
> just record directly into the ring buffer again like it used to.
>
> Yes the above may be special, but your patch breaks it.

Indeed, I'm feeling a bit stupid for sending that patch; I should have
used my brain while reading the source. Thanks for the explanation.
On Tue, 13 Jun 2023 07:19:14 +0200
Sven Schnelle <svens@linux.ibm.com> wrote:

> > Yes the above may be special, but your patch breaks it.
>
> Indeed, I'm feeling a bit stupid for sending that patch; I should have
> used my brain while reading the source. Thanks for the explanation.

Does this quiet the fortifier?

-- Steve

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 64a4dde073ef..1bac7df1f4b6 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -3118,6 +3118,7 @@ static void __ftrace_trace_stack(struct trace_buffer *buffer,
 	struct ftrace_stack *fstack;
 	struct stack_entry *entry;
 	int stackidx;
+	void *stack;
 
 	/*
 	 * Add one, for this function and the call to save_stack_trace()
@@ -3163,7 +3164,18 @@ static void __ftrace_trace_stack(struct trace_buffer *buffer,
 		goto out;
 
 	entry = ring_buffer_event_data(event);
-	memcpy(&entry->caller, fstack->calls, size);
+	/*
+	 * For backward compatibility reasons, the entry->caller is an
+	 * array of 8 slots to store the stack. This is also exported
+	 * to user space. The amount allocated on the ring buffer actually
+	 * holds enough for the stack specified by nr_entries. This will
+	 * go into the location of entry->caller. Due to string fortifiers
+	 * checking the size of the destination of memcpy() it triggers
+	 * when it detects that size is greater than 8. To hide this from
+	 * the fortifiers, use a different pointer "stack".
+	 */
+	stack = (void *)&entry->caller;
+	memcpy(stack, fstack->calls, size);
 	entry->size = nr_entries;
 
 	if (!call_filter_check_discard(call, entry, buffer, event))
Steven Rostedt <rostedt@goodmis.org> writes:

> On Tue, 13 Jun 2023 07:19:14 +0200
> Sven Schnelle <svens@linux.ibm.com> wrote:
>
>> > Yes the above may be special, but your patch breaks it.
>>
>> Indeed, I'm feeling a bit stupid for sending that patch; I should have
>> used my brain while reading the source. Thanks for the explanation.
>
> Does this quiet the fortifier?
> [..]

No, I'm still getting the same warning:

[    2.302776] memcpy: detected field-spanning write (size 104) of single field "stack" at kernel/trace/trace.c:3178 (size 64)
From: Sven Schnelle
> Sent: 14 June 2023 11:41
>
> Steven Rostedt <rostedt@goodmis.org> writes:
>
> > On Tue, 13 Jun 2023 07:19:14 +0200
> > Sven Schnelle <svens@linux.ibm.com> wrote:
> >
> >> > Yes the above may be special, but your patch breaks it.
> >>
> >> Indeed, I'm feeling a bit stupid for sending that patch; I should have
> >> used my brain while reading the source. Thanks for the explanation.
> >
> > Does this quiet the fortifier?
> > [..]
>
> No, I'm still getting the same warning:
>
> [    2.302776] memcpy: detected field-spanning write (size 104) of single
> field "stack" at kernel/trace/trace.c:3178 (size 64)

What about:
	(memcpy)(......)

Or maybe:
	(__builtin_memcpy)(....)

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)
Hi Steven,

Sven Schnelle <svens@linux.ibm.com> writes:

> Steven Rostedt <rostedt@goodmis.org> writes:
>
>> On Tue, 13 Jun 2023 07:19:14 +0200
>> Sven Schnelle <svens@linux.ibm.com> wrote:
>>
>>> > Yes the above may be special, but your patch breaks it.
>>>
>>> Indeed, I'm feeling a bit stupid for sending that patch; I should have
>>> used my brain while reading the source. Thanks for the explanation.
>>
>> Does this quiet the fortifier?
>> [..]
>
> No, I'm still getting the same warning:
>
> [    2.302776] memcpy: detected field-spanning write (size 104) of single field "stack" at kernel/trace/trace.c:3178 (size 64)

BTW, I'm seeing the same error on x86 with current master when
CONFIG_FORTIFY_SOURCE=y and CONFIG_SCHED_TRACER=y:

[    3.089395] Testing tracer wakeup:
[    3.205602] ------------[ cut here ]------------
[    3.205958] memcpy: detected field-spanning write (size 112) of single field "&entry->caller" at kernel/trace/trace.c:3173 (size 64)
[    3.205958] WARNING: CPU: 1 PID: 0 at kernel/trace/trace.c:3173 __ftrace_trace_stack+0x1d1/0x1e0
[    3.205958] Modules linked in:
[    3.205958] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 6.5.0-rc1-00012-g77341f6d2110-dirty #50
[    3.205958] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-1.fc38 04/01/2014
[    3.205958] RIP: 0010:__ftrace_trace_stack+0x1d1/0x1e0
[    3.205958] Code: ff ff ff b9 40 00 00 00 4c 89 f6 48 c7 c2 d8 d3 9a 82 48 c7 c7 e8 82 99 82 48 89 44 24 08 c6 05 9d 8c 30 02 01 e8 0f 88 ed ff <0f> 0b 48 8b 44 24 08 e9 f4 fe ff ff 0f 1f 00 90 90 90 90 90 90 90
[    3.205958] RSP: 0000:ffffc90000100ee0 EFLAGS: 00010086
[    3.205958] RAX: 0000000000000000 RBX: ffff8881003db034 RCX: c0000000ffffdfff
[    3.205958] RDX: 0000000000000000 RSI: 00000000ffffdfff RDI: 0000000000000001
[    3.205958] RBP: ffff8881003db03c R08: 0000000000000000 R09: ffffc90000100d88
[    3.205958] R10: 0000000000000003 R11: ffffffff83343008 R12: ffff88810007a100
[    3.205958] R13: 000000000000000e R14: 0000000000000070 R15: 0000000000000070
[    3.205958] FS:  0000000000000000(0000) GS:ffff88817bc40000(0000) knlGS:0000000000000000
[    3.205958] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[    3.205958] CR2: 0000000000000000 CR3: 000000000322e000 CR4: 00000000000006e0
[    3.205958] Call Trace:
[    3.205958]  <IRQ>
[    3.205958]  ? __ftrace_trace_stack+0x1d1/0x1e0
[    3.205958]  ? __warn+0x81/0x130
[    3.205958]  ? __ftrace_trace_stack+0x1d1/0x1e0
[    3.205958]  ? report_bug+0x171/0x1a0
[    3.205958]  ? handle_bug+0x3a/0x70
[    3.205958]  ? exc_invalid_op+0x17/0x70
[    3.205958]  ? asm_exc_invalid_op+0x1a/0x20
[    3.205958]  ? __ftrace_trace_stack+0x1d1/0x1e0
[    3.205958]  probe_wakeup+0x28e/0x340
[    3.205958]  ttwu_do_activate.isra.0+0x132/0x190
[    3.205958]  sched_ttwu_pending+0x97/0x110
[    3.205958]  __flush_smp_call_function_queue+0x131/0x400
[    3.205958]  __sysvec_call_function_single+0x2d/0xd0
[    3.205958]  sysvec_call_function_single+0x65/0x80
[    3.205958]  </IRQ>
[    3.205958]  <TASK>
[    3.205958]  asm_sysvec_call_function_single+0x1a/0x20
[    3.205958] RIP: 0010:default_idle+0xf/0x20
[    3.205958] Code: 4c 01 c7 4c 29 c2 e9 72 ff ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa eb 07 0f 00 2d 43 5f 31 00 fb f4 <fa> c3 cc cc cc cc 66 66 2e 0f 1f 84 00 00 00 00 00 90 90 90 90 90
On Wed, 12 Jul 2023 16:06:27 +0200
Sven Schnelle <svens@linux.ibm.com> wrote:

> > No, I'm still getting the same warning:
> >
> > [    2.302776] memcpy: detected field-spanning write (size 104) of single field "stack" at kernel/trace/trace.c:3178 (size 64)
>
> BTW, I'm seeing the same error on x86 with current master when
> CONFIG_FORTIFY_SOURCE=y and CONFIG_SCHED_TRACER=y:

As I don't know how the fortifier works, nor what exactly it is checking,
do you have any idea on how to quiet it?

This is a false positive, as I described before.

-- Steve
On Wed, 12 Jul 2023 10:14:34 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:

> On Wed, 12 Jul 2023 16:06:27 +0200
> Sven Schnelle <svens@linux.ibm.com> wrote:
>
> > BTW, I'm seeing the same error on x86 with current master when
> > CONFIG_FORTIFY_SOURCE=y and CONFIG_SCHED_TRACER=y:
>
> As I don't know how the fortifier works, nor what exactly it is checking,
> do you have any idea on how to quiet it?
>
> This is a false positive, as I described before.

Hmm, maybe this would work?

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 4529e264cb86..20122eeccf97 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -3118,6 +3118,7 @@ static void __ftrace_trace_stack(struct trace_buffer *buffer,
 	struct ftrace_stack *fstack;
 	struct stack_entry *entry;
 	int stackidx;
+	void *ptr;
 
 	/*
 	 * Add one, for this function and the call to save_stack_trace()
@@ -3161,9 +3162,25 @@ static void __ftrace_trace_stack(struct trace_buffer *buffer,
 			    trace_ctx);
 	if (!event)
 		goto out;
-	entry = ring_buffer_event_data(event);
+	ptr = ring_buffer_event_data(event);
+	entry = ptr;
+
+	/*
+	 * For backward compatibility reasons, the entry->caller is an
+	 * array of 8 slots to store the stack. This is also exported
+	 * to user space. The amount allocated on the ring buffer actually
+	 * holds enough for the stack specified by nr_entries. This will
+	 * go into the location of entry->caller. Due to string fortifiers
+	 * checking the size of the destination of memcpy() it triggers
+	 * when it detects that size is greater than 8. To hide this from
+	 * the fortifiers, we use "ptr" and pointer arithmetic to assign caller.
+	 *
+	 * The below is really just:
+	 *	memcpy(&entry->caller, fstack->calls, size);
+	 */
+	ptr += offsetof(typeof(*entry), caller);
+	memcpy(ptr, fstack->calls, size);
 
-	memcpy(&entry->caller, fstack->calls, size);
 	entry->size = nr_entries;
 
 	if (!call_filter_check_discard(call, entry, buffer, event))

-- Steve
Hi Steven,

Steven Rostedt <rostedt@goodmis.org> writes:

> On Wed, 12 Jul 2023 16:06:27 +0200
> Sven Schnelle <svens@linux.ibm.com> wrote:
>
>> BTW, I'm seeing the same error on x86 with current master when
>> CONFIG_FORTIFY_SOURCE=y and CONFIG_SCHED_TRACER=y:
>
> As I don't know how the fortifier works, nor what exactly it is checking,
> do you have any idea on how to quiet it?
>
> This is a false positive, as I described before.

The "problem" is that struct stack_entry is

	struct stack_entry {
		int size;
		unsigned long caller[8];
	};

So, as you explained, the ring buffer code allocates some space after the
struct for additional entries:

	struct stack_entry 1;
	<additional space for 1>
	struct stack_entry 2;
	<additional space for 2>
	...

But the struct member that is passed to memcpy() still carries the type
information 'caller is an array with 8 members of 8 bytes', so the
memcpy() fortify check complains. I'm not sure whether we can blame the
compiler or the fortify code here. One (ugly and whitespace damaged)
workaround is:

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 35b11f5a9519..31acd8a6b97e 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -3170,7 +3170,8 @@ static void __ftrace_trace_stack(struct trace_buffer *buffer,
 		goto out;
 
 	entry = ring_buffer_event_data(event);
-	memcpy(&entry->caller, fstack->calls, size);
+	void *p = (void *)entry + offsetof(struct stack_entry, caller);
+	memcpy(p, fstack->calls, size);
 	entry->size = nr_entries;
 
 	if (!call_filter_check_discard(call, entry, buffer, event))

(note the (void *) cast: without it, `entry + offsetof(...)` would advance
by whole structs rather than bytes)

So with that offsetof() calculation the compiler doesn't know about the
8 entries * 8 bytes limitation. Adding Kees to the thread, maybe he knows
some way.
Hi Steven,

Steven Rostedt <rostedt@goodmis.org> writes:

>> As I don't know how the fortifier works, nor what exactly it is checking,
>> do you have any idea on how to quiet it?
>>
>> This is a false positive, as I described before.
>
> Hmm, maybe this would work?
>
> diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> index 4529e264cb86..20122eeccf97 100644
> --- a/kernel/trace/trace.c
> +++ b/kernel/trace/trace.c
> @@ -3118,6 +3118,7 @@ static void __ftrace_trace_stack(struct trace_buffer *buffer,
>  	struct ftrace_stack *fstack;
>  	struct stack_entry *entry;
>  	int stackidx;
> +	void *ptr;
>  
>  	/*
>  	 * Add one, for this function and the call to save_stack_trace()
> @@ -3161,9 +3162,25 @@ static void __ftrace_trace_stack(struct trace_buffer *buffer,
>  			    trace_ctx);
>  	if (!event)
>  		goto out;
> -	entry = ring_buffer_event_data(event);
> +	ptr = ring_buffer_event_data(event);
> +	entry = ptr;
> +
> +	/*
> +	 * For backward compatibility reasons, the entry->caller is an
> +	 * array of 8 slots to store the stack. This is also exported
> +	 * to user space. The amount allocated on the ring buffer actually
> +	 * holds enough for the stack specified by nr_entries. This will
> +	 * go into the location of entry->caller. Due to string fortifiers
> +	 * checking the size of the destination of memcpy() it triggers
> +	 * when it detects that size is greater than 8. To hide this from
> +	 * the fortifiers, we use "ptr" and pointer arithmetic to assign caller.
> +	 *
> +	 * The below is really just:
> +	 *	memcpy(&entry->caller, fstack->calls, size);
> +	 */
> +	ptr += offsetof(typeof(*entry), caller);
> +	memcpy(ptr, fstack->calls, size);
>  
> -	memcpy(&entry->caller, fstack->calls, size);
>  	entry->size = nr_entries;
>  
>  	if (!call_filter_check_discard(call, entry, buffer, event))

I just sent about the same thing without the nice comment. So yes, this
works. :-)
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 64a4dde073ef..988d664c13ec 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -3146,7 +3146,7 @@ static void __ftrace_trace_stack(struct trace_buffer *buffer,
 	barrier();
 
 	fstack = this_cpu_ptr(ftrace_stacks.stacks) + stackidx;
-	size = ARRAY_SIZE(fstack->calls);
+	size = min(ARRAY_SIZE(entry->caller), ARRAY_SIZE(fstack->calls));
 
 	if (regs) {
 		nr_entries = stack_trace_save_regs(regs, fstack->calls,