Message ID | 20240101130828.3666251-1-harshit.m.mogalapalli@oracle.com |
---|---|
State | New |
Headers |
From: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
To: linux-hardening@vger.kernel.org, keescook@chromium.org, gustavoars@kernel.org, Bryan Tan <bryantan@vmware.com>, Vishnu Dasa <vdasa@vmware.com>, VMware PV-Drivers Reviewers <pv-drivers@vmware.com>, Arnd Bergmann <arnd@arndb.de>, Greg Kroah-Hartman <gregkh@linuxfoundation.org>, linux-kernel@vger.kernel.org
Cc: vegard.nossum@oracle.com, darren.kenny@oracle.com, harshit.m.mogalapalli@oracle.com, syzkaller <syzkaller@googlegroups.com>
Subject: [RFC PATCH] VMCI: Silence memcpy() run-time false positive warning
Date: Mon, 1 Jan 2024 05:08:28 -0800
Message-ID: <20240101130828.3666251-1-harshit.m.mogalapalli@oracle.com>
Series | [RFC] VMCI: Silence memcpy() run-time false positive warning |
Commit Message
Harshit Mogalapalli
Jan. 1, 2024, 1:08 p.m. UTC
Syzkaller hit 'WARNING in dg_dispatch_as_host' bug.
memcpy: detected field-spanning write (size 56) of single field "&dg_info->msg"
at drivers/misc/vmw_vmci/vmci_datagram.c:237 (size 24)
WARNING: CPU: 0 PID: 1555 at drivers/misc/vmw_vmci/vmci_datagram.c:237
dg_dispatch_as_host+0x88e/0xa60 drivers/misc/vmw_vmci/vmci_datagram.c:237
Some code commentary, based on my understanding:
544 #define VMCI_DG_SIZE(_dg) (VMCI_DG_HEADERSIZE + (size_t)(_dg)->payload_size)
/// This is 24 + payload_size
memcpy(&dg_info->msg, dg, dg_size);
Destination = dg_info->msg --> a 24-byte structure (struct vmci_datagram)
Source      = dg           --> a 24-byte structure (struct vmci_datagram)
Size        = dg_size      = 24 + payload_size

{payload_size = 56 - 24 = 32} -- Syzkaller managed to set payload_size to 32.
35 struct delayed_datagram_info {
36 struct datagram_entry *entry;
37 struct work_struct work;
38 bool in_dg_host_queue;
39 /* msg and msg_payload must be together. */
40 struct vmci_datagram msg;
41 u8 msg_payload[];
42 };
So those extra bytes of payload are copied into msg_payload[]; there is
no actual bug, but a run-time warning is seen while fuzzing with
Syzkaller.

One possible way to silence the warning is to split the memcpy() into
two parts: one copying the msg, and a second taking care of the payload.
Reported-by: syzkaller <syzkaller@googlegroups.com>
Suggested-by: Vegard Nossum <vegard.nossum@oracle.com>
Signed-off-by: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
---
This patch is only tested with the C reproducer; no testing specific to
the driver has been done.
---
drivers/misc/vmw_vmci/vmci_datagram.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
Comments
On Mon, Jan 01, 2024 at 05:08:28AM -0800, Harshit Mogalapalli wrote:
> Syzkaller hit 'WARNING in dg_dispatch_as_host' bug.
>
> memcpy: detected field-spanning write (size 56) of single field "&dg_info->msg"
> at drivers/misc/vmw_vmci/vmci_datagram.c:237 (size 24)
>
> WARNING: CPU: 0 PID: 1555 at drivers/misc/vmw_vmci/vmci_datagram.c:237
> dg_dispatch_as_host+0x88e/0xa60 drivers/misc/vmw_vmci/vmci_datagram.c:237
>
> Some code commentry, based on my understanding:
>
> 544 #define VMCI_DG_SIZE(_dg) (VMCI_DG_HEADERSIZE + (size_t)(_dg)->payload_size)
> /// This is 24 + payload_size
>
> memcpy(&dg_info->msg, dg, dg_size);
> Destination = dg_info->msg ---> this is a 24 byte
> structure(struct vmci_datagram)
> Source = dg --> this is a 24 byte structure (struct vmci_datagram)
> Size = dg_size = 24 + payload_size
>
> {payload_size = 56-24 =32} -- Syzkaller managed to set payload_size to 32.
>
> 35 struct delayed_datagram_info {
> 36 	struct datagram_entry *entry;
> 37 	struct work_struct work;
> 38 	bool in_dg_host_queue;
> 39 	/* msg and msg_payload must be together. */
> 40 	struct vmci_datagram msg;
> 41 	u8 msg_payload[];
> 42 };
>
> So those extra bytes of payload are copied into msg_payload[], so there
> is no bug, but a run time warning is seen while fuzzing with Syzkaller.
>
> One possible way to silence the warning is to split the memcpy() into
> two parts -- one -- copying the msg and second taking care of payload.

And what are the performance impacts of this?

thanks,

greg k-h
On 1/1/24 07:08, Harshit Mogalapalli wrote:
> Syzkaller hit 'WARNING in dg_dispatch_as_host' bug.
>
> memcpy: detected field-spanning write (size 56) of single field "&dg_info->msg"
> at drivers/misc/vmw_vmci/vmci_datagram.c:237 (size 24)

This is not a 'false positive warning.' This is a legitimate warning
coming from the fortified memcpy().

Under FORTIFY_SOURCE we should not copy data across multiple members
in a structure. For that we have alternatives like struct_group(), or as
in this case, splitting the memcpy(), or as I suggest below, a mix of
direct assignment and memcpy().

> WARNING: CPU: 0 PID: 1555 at drivers/misc/vmw_vmci/vmci_datagram.c:237
> dg_dispatch_as_host+0x88e/0xa60 drivers/misc/vmw_vmci/vmci_datagram.c:237
>
> Some code commentry, based on my understanding:
>
> 544 #define VMCI_DG_SIZE(_dg) (VMCI_DG_HEADERSIZE + (size_t)(_dg)->payload_size)
> /// This is 24 + payload_size
>
> memcpy(&dg_info->msg, dg, dg_size);
> Destination = dg_info->msg ---> this is a 24 byte
> structure(struct vmci_datagram)
> Source = dg --> this is a 24 byte structure (struct vmci_datagram)
> Size = dg_size = 24 + payload_size
>
> {payload_size = 56-24 =32} -- Syzkaller managed to set payload_size to 32.
>
> 35 struct delayed_datagram_info {
> 36 	struct datagram_entry *entry;
> 37 	struct work_struct work;
> 38 	bool in_dg_host_queue;
> 39 	/* msg and msg_payload must be together. */
> 40 	struct vmci_datagram msg;
> 41 	u8 msg_payload[];
> 42 };
>
> So those extra bytes of payload are copied into msg_payload[], so there
> is no bug, but a run time warning is seen while fuzzing with Syzkaller.
>
> One possible way to silence the warning is to split the memcpy() into
> two parts -- one -- copying the msg and second taking care of payload.
>
> Reported-by: syzkaller <syzkaller@googlegroups.com>
> Suggested-by: Vegard Nossum <vegard.nossum@oracle.com>
> Signed-off-by: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
> ---
> This patch is only tested with the C reproducer, not any testing
> specific to driver is done.
> ---
>  drivers/misc/vmw_vmci/vmci_datagram.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/misc/vmw_vmci/vmci_datagram.c b/drivers/misc/vmw_vmci/vmci_datagram.c
> index f50d22882476..b43661590f56 100644
> --- a/drivers/misc/vmw_vmci/vmci_datagram.c
> +++ b/drivers/misc/vmw_vmci/vmci_datagram.c
> @@ -216,6 +216,7 @@ static int dg_dispatch_as_host(u32 context_id, struct vmci_datagram *dg)
>  	if (dst_entry->run_delayed ||
>  	    dg->src.context == VMCI_HOST_CONTEXT_ID) {
>  		struct delayed_datagram_info *dg_info;
> +		size_t payload_size = dg_size - VMCI_DG_HEADERSIZE;

This seems to be the same as `dg->payload_size`, so I don't think a new
variable is necessary.

>  		if (atomic_add_return(1, &delayed_dg_host_queue_size) ==
>  		    VMCI_MAX_DELAYED_DG_HOST_QUEUE_SIZE) {
> @@ -234,7 +235,8 @@ static int dg_dispatch_as_host(u32 context_id, struct vmci_datagram *dg)
>
>  		dg_info->in_dg_host_queue = true;
>  		dg_info->entry = dst_entry;
> -		memcpy(&dg_info->msg, dg, dg_size);
> +		memcpy(&dg_info->msg, dg, VMCI_DG_HEADERSIZE);
> +		memcpy(&dg_info->msg_payload, dg + 1, payload_size);

I think a direct assignment and a call to memcpy() is better in this case,
something like this:

	dg_info->msg = *dg;
	memcpy(&dg_info->msg_payload, dg + 1, dg->payload_size);

However, that `dg + 1` thing is making my eyes twitch. Where exactly are we
making sure that `dg` actually points to an area in memory bigger than
`sizeof(*dg)`?...

Also, we could use struct_size() during allocation, some lines above:

-	dg_info = kmalloc(sizeof(*dg_info) +
-			  (size_t) dg->payload_size, GFP_ATOMIC);
+	dg_info = kmalloc(struct_size(dg_info, msg_payload, dg->payload_size),
+			  GFP_ATOMIC);

--
Gustavo

>
> 	INIT_WORK(&dg_info->work, dg_delayed_dispatch);
> 	schedule_work(&dg_info->work);
Hi Greg,

On 01/01/24 7:25 pm, Greg Kroah-Hartman wrote:
> On Mon, Jan 01, 2024 at 05:08:28AM -0800, Harshit Mogalapalli wrote:
>> Syzkaller hit 'WARNING in dg_dispatch_as_host' bug.
>>
>> memcpy: detected field-spanning write (size 56) of single field "&dg_info->msg"
>> at drivers/misc/vmw_vmci/vmci_datagram.c:237 (size 24)
>>
>> WARNING: CPU: 0 PID: 1555 at drivers/misc/vmw_vmci/vmci_datagram.c:237
>> dg_dispatch_as_host+0x88e/0xa60 drivers/misc/vmw_vmci/vmci_datagram.c:237
>>
>> Some code commentry, based on my understanding:
>>
>> 544 #define VMCI_DG_SIZE(_dg) (VMCI_DG_HEADERSIZE + (size_t)(_dg)->payload_size)
>> /// This is 24 + payload_size
>>
>> memcpy(&dg_info->msg, dg, dg_size);
>> Destination = dg_info->msg ---> this is a 24 byte
>> structure(struct vmci_datagram)
>> Source = dg --> this is a 24 byte structure (struct vmci_datagram)
>> Size = dg_size = 24 + payload_size
>>
>> {payload_size = 56-24 =32} -- Syzkaller managed to set payload_size to 32.
>>
>> 35 struct delayed_datagram_info {
>> 36 	struct datagram_entry *entry;
>> 37 	struct work_struct work;
>> 38 	bool in_dg_host_queue;
>> 39 	/* msg and msg_payload must be together. */
>> 40 	struct vmci_datagram msg;
>> 41 	u8 msg_payload[];
>> 42 };
>>
>> So those extra bytes of payload are copied into msg_payload[], so there
>> is no bug, but a run time warning is seen while fuzzing with Syzkaller.
>>
>> One possible way to silence the warning is to split the memcpy() into
>> two parts -- one -- copying the msg and second taking care of payload.
>
> And what are the performance impacts of this?

I haven't done any performance tests on this. I tried to look at the
diff in assembly code but couldn't comment on performance from that.

Also, Gustavo suggested doing this instead of two memcpy()'s: a direct
assignment, and a memcpy() for the payload part.

Is there a way to do perf analysis based on code without access to
hardware?

Thanks,
Harshit

> thanks,
>
> greg k-h
Hi Gustavo,

On 01/01/24 11:13 pm, Gustavo A. R. Silva wrote:
>
> On 1/1/24 07:08, Harshit Mogalapalli wrote:
>> Syzkaller hit 'WARNING in dg_dispatch_as_host' bug.
>>
>> memcpy: detected field-spanning write (size 56) of single field
>> "&dg_info->msg"
>> at drivers/misc/vmw_vmci/vmci_datagram.c:237 (size 24)
>
> This is not a 'false postive warning.' This is a legitimately warning
> coming from the fortified memcpy().
>
> Under FORTIFY_SOURCE we should not copy data across multiple members
> in a structure. For that we alternatives like struct_group(), or as
> in this case, splitting memcpy(), or as I suggest below, a mix of
> direct assignment and memcpy().

Thanks for sharing this.

>> struct vmci_datagram *dg)
>> 	if (dst_entry->run_delayed ||
>> 	    dg->src.context == VMCI_HOST_CONTEXT_ID) {
>> 		struct delayed_datagram_info *dg_info;
>> +		size_t payload_size = dg_size - VMCI_DG_HEADERSIZE;
>
> This seems to be the same as `dg->payload_size`, so I don't think a new
> variable is necessary.

Oh right, this is unnecessary. I will remove it.

>> 	if (atomic_add_return(1, &delayed_dg_host_queue_size)
>> 	    == VMCI_MAX_DELAYED_DG_HOST_QUEUE_SIZE) {
>> @@ -234,7 +235,8 @@ static int dg_dispatch_as_host(u32 context_id,
>> struct vmci_datagram *dg)
>> 	dg_info->in_dg_host_queue = true;
>> 	dg_info->entry = dst_entry;
>> -	memcpy(&dg_info->msg, dg, dg_size);
>> +	memcpy(&dg_info->msg, dg, VMCI_DG_HEADERSIZE);
>> +	memcpy(&dg_info->msg_payload, dg + 1, payload_size);
>
> I think a direct assignment and a call to memcpy() is better in this case,
> something like this:
>
> 	dg_info->msg = *dg;
> 	memcpy(&dg_info->msg_payload, dg + 1, dg->payload_size);
>
> However, that `dg + 1` thing is making my eyes twitch. Where exactly are we
> making sure that `dg` actually points to an area in memory bigger than
> `sizeof(*dg)`?...

Going up on the call tree:

-> vmci_transport_dgram_enqueue()
--> vmci_datagram_send()
---> vmci_datagram_dispatch()
----> dg_dispatch_as_host()

1694 static int vmci_transport_dgram_enqueue(
1695 	struct vsock_sock *vsk,
1696 	struct sockaddr_vm *remote_addr,
1697 	struct msghdr *msg,
1698 	size_t len)
1699 {
1700 	int err;
1701 	struct vmci_datagram *dg;
1702
1703 	if (len > VMCI_MAX_DG_PAYLOAD_SIZE)
1704 		return -EMSGSIZE;
1705
1706 	if (!vmci_transport_allow_dgram(vsk, remote_addr->svm_cid))
1707 		return -EPERM;
1708
1709 	/* Allocate a buffer for the user's message and our packet header. */
1710 	dg = kmalloc(len + sizeof(*dg), GFP_KERNEL);
1711 	if (!dg)
1712 		return -ENOMEM;

^^^ dg = kmalloc(len + sizeof(*dg), GFP_KERNEL);

I think from this we can say the memory allocated for dg is bigger than
sizeof(*dg).

> Also, we could also use struct_size() during allocation, some lines above:
>
> -	dg_info = kmalloc(sizeof(*dg_info) +
> -			  (size_t) dg->payload_size, GFP_ATOMIC);
> +	dg_info = kmalloc(struct_size(dg_info, msg_payload, dg->payload_size),
> +			  GFP_ATOMIC);

Thanks again for the suggestion. I still couldn't figure out the
performance comparison before and after the patch. Once I have some
reasoning, I will include the above changes and send a v2.

Thanks,
Harshit

> --
> Gustavo
>
>> 	INIT_WORK(&dg_info->work, dg_delayed_dispatch);
>> 	schedule_work(&dg_info->work);
On 01/01/2024 14:55, Greg Kroah-Hartman wrote:
> On Mon, Jan 01, 2024 at 05:08:28AM -0800, Harshit Mogalapalli wrote:
>> One possible way to silence the warning is to split the memcpy() into
>> two parts -- one -- copying the msg and second taking care of payload.
>
> And what are the performance impacts of this?

I did a disassembly diff for the version of the patch that uses
dg->payload_size directly in the second memcpy and I get this as the
only change:

@@ -419,11 +419,16 @@
   mov  %rax,%rbx
   test %rax,%rax
   je
+ mov  0x0(%rbp),%rdx
  mov  %r14,(%rax)
- mov  %r13,%rdx
- mov  %rbp,%rsi
- lea  0x30(%rax),%rdi
+ lea  0x18(%rbp),%rsi
+ lea  0x48(%rax),%rdi
  movb $0x1,0x28(%rax)
+ mov  %rdx,0x30(%rax)
+ mov  0x8(%rbp),%rdx
+ mov  %rdx,0x38(%rax)
+ mov  0x10(%rbp),%rdx
+ mov  %rdx,0x40(%rax)
  call
  mov  0x0(%rip),%rsi #
  lea  0x8(%rbx),%rdx

Basically, I believe it's inlining the first constant-size memcpy and
keeping the second one as a call.

Overall, the number of memory accesses should be the same.

The biggest impact that I can see is therefore the code size (which
isn't much).

There is also a kmalloc() on the same code path that I assume would
dwarf any performance impact from this patch -- but happy to be corrected.

Vegard
On 1/4/24 12:31, Vegard Nossum wrote:
>
> On 01/01/2024 14:55, Greg Kroah-Hartman wrote:
>> On Mon, Jan 01, 2024 at 05:08:28AM -0800, Harshit Mogalapalli wrote:
>>> One possible way to silence the warning is to split the memcpy() into
>>> two parts -- one -- copying the msg and second taking care of payload.
>>
>> And what are the performance impacts of this?
>
> I did a disasssembly diff for the version of the patch that uses
> dg->payload_size directly in the second memcpy and I get this as the
> only change:
>
> @@ -419,11 +419,16 @@
>   mov  %rax,%rbx
>   test %rax,%rax
>   je
> + mov  0x0(%rbp),%rdx
>  mov  %r14,(%rax)
> - mov  %r13,%rdx
> - mov  %rbp,%rsi
> - lea  0x30(%rax),%rdi
> + lea  0x18(%rbp),%rsi
> + lea  0x48(%rax),%rdi
>  movb $0x1,0x28(%rax)
> + mov  %rdx,0x30(%rax)
> + mov  0x8(%rbp),%rdx
> + mov  %rdx,0x38(%rax)
> + mov  0x10(%rbp),%rdx
> + mov  %rdx,0x40(%rax)
>  call
>  mov  0x0(%rip),%rsi #
>  lea  0x8(%rbx),%rdx
>
> Basically, I believe it's inlining the first constant-size memcpy and
> keeping the second one as a call.
>
> Overall, the number of memory accesses should be the same.
>
> The biggest impact that I can see is therefore the code size (which
> isn't much).

Yep, I don't think this is a problem.

I look forward to reviewing v2 of this patch.

Thanks
--
Gustavo

> There is also a kmalloc() on the same code path that I assume would
> dwarf any performance impact from this patch -- but happy to be corrected.
>
> Vegard
diff --git a/drivers/misc/vmw_vmci/vmci_datagram.c b/drivers/misc/vmw_vmci/vmci_datagram.c
index f50d22882476..b43661590f56 100644
--- a/drivers/misc/vmw_vmci/vmci_datagram.c
+++ b/drivers/misc/vmw_vmci/vmci_datagram.c
@@ -216,6 +216,7 @@ static int dg_dispatch_as_host(u32 context_id, struct vmci_datagram *dg)
 	if (dst_entry->run_delayed ||
 	    dg->src.context == VMCI_HOST_CONTEXT_ID) {
 		struct delayed_datagram_info *dg_info;
+		size_t payload_size = dg_size - VMCI_DG_HEADERSIZE;
 
 		if (atomic_add_return(1, &delayed_dg_host_queue_size) ==
 		    VMCI_MAX_DELAYED_DG_HOST_QUEUE_SIZE) {
@@ -234,7 +235,8 @@ static int dg_dispatch_as_host(u32 context_id, struct vmci_datagram *dg)
 
 		dg_info->in_dg_host_queue = true;
 		dg_info->entry = dst_entry;
-		memcpy(&dg_info->msg, dg, dg_size);
+		memcpy(&dg_info->msg, dg, VMCI_DG_HEADERSIZE);
+		memcpy(&dg_info->msg_payload, dg + 1, payload_size);
 
 		INIT_WORK(&dg_info->work, dg_delayed_dispatch);
 		schedule_work(&dg_info->work);