From patchwork Thu Oct 5 19:35:49 2023
X-Patchwork-Submitter: Andrew Kanner
X-Patchwork-Id: 149009
From: Andrew Kanner <andrew.kanner@gmail.com>
To: bjorn@kernel.org, magnus.karlsson@intel.com,
    maciej.fijalkowski@intel.com, jonathan.lemon@gmail.com,
    davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
    pabeni@redhat.com, aleksander.lobakin@intel.com,
    xuanzhuo@linux.alibaba.com, ast@kernel.org, hawk@kernel.org,
    john.fastabend@gmail.com, daniel@iogearbox.net
Cc: linux-kernel-mentees@lists.linuxfoundation.org, netdev@vger.kernel.org,
    bpf@vger.kernel.org, linux-kernel@vger.kernel.org,
    syzbot+fae676d3cf469331fc89@syzkaller.appspotmail.com,
    syzbot+b132693e925cbbd89e26@syzkaller.appspotmail.com,
    Andrew Kanner <andrew.kanner@gmail.com>
Subject: [PATCH bpf v3] net/xdp: fix zero-size allocation warning in xskq_create()
Date: Thu, 5 Oct 2023 22:35:49 +0300
Message-Id: <20231005193548.515-1-andrew.kanner@gmail.com>
X-Mailer: git-send-email 2.39.3

Syzkaller reported the following issue:

------------[ cut here ]------------
WARNING: CPU: 0 PID: 2807 at mm/vmalloc.c:3247 __vmalloc_node_range (mm/vmalloc.c:3361)
Modules linked in:
CPU: 0 PID: 2807 Comm: repro Not tainted 6.6.0-rc2+ #12
Hardware name: Generic DT based system
 unwind_backtrace from show_stack (arch/arm/kernel/traps.c:258)
 show_stack from dump_stack_lvl (lib/dump_stack.c:107 (discriminator 1))
 dump_stack_lvl from __warn (kernel/panic.c:633 kernel/panic.c:680)
 __warn from warn_slowpath_fmt (./include/linux/context_tracking.h:153 kernel/panic.c:700)
 warn_slowpath_fmt from __vmalloc_node_range (mm/vmalloc.c:3361 (discriminator 3))
 __vmalloc_node_range from vmalloc_user (mm/vmalloc.c:3478)
 vmalloc_user from xskq_create (net/xdp/xsk_queue.c:40)
 xskq_create from xsk_setsockopt (net/xdp/xsk.c:953 net/xdp/xsk.c:1286)
 xsk_setsockopt from __sys_setsockopt (net/socket.c:2308)
 __sys_setsockopt from ret_fast_syscall (arch/arm/kernel/entry-common.S:68)

xskq_get_ring_size() uses the struct_size() macro to safely calculate the
size of struct xsk_queue plus q->nentries desc members. But the syzkaller
repro was able to set q->nentries (the value is initially taken from
copy_from_sockptr()) high enough for struct_size() to return SIZE_MAX.
The subsequent PAGE_ALIGN(size) in such a case overflows the size_t value
and sets it to 0. This triggers the WARN_ON_ONCE in vmalloc_user() ->
__vmalloc_node_range(). The issue is reproducible on a 32-bit arm kernel.
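
For illustration, here is a minimal userspace sketch (not kernel code) of the
arithmetic involved, assuming 4 KiB pages and an open-coded equivalent of the
kernel's PAGE_ALIGN(): once struct_size() has saturated to SIZE_MAX, aligning
the value wraps it around to 0, which is the zero-size request that
vmalloc_user() warns about.

  #include <stdio.h>
  #include <stdint.h>

  /* assumptions: 4 KiB pages, PAGE_ALIGN() open-coded like the kernel macro */
  #define PAGE_SIZE     4096UL
  #define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

  int main(void)
  {
          size_t size = SIZE_MAX;  /* struct_size() saturated after overflow */

          /* SIZE_MAX + PAGE_SIZE - 1 wraps around; masking then yields 0 */
          printf("PAGE_ALIGN(SIZE_MAX) = %zu\n", (size_t)PAGE_ALIGN(size));
          return 0;
  }
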
Reported-and-tested-by: syzbot+fae676d3cf469331fc89@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/000000000000c84b4705fb31741e@google.com/T/
Link: https://syzkaller.appspot.com/bug?extid=fae676d3cf469331fc89
Reported-by: syzbot+b132693e925cbbd89e26@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/000000000000e20df20606ebab4f@google.com/T/
Fixes: 9f78bf330a66 ("xsk: support use vaddr as ring")
Signed-off-by: Andrew Kanner <andrew.kanner@gmail.com>
---
Notes (akanner):
    v3:
      - free the kzalloc-ed memory before returning, the leak was noticed
        by Daniel Borkmann
    v2: https://lore.kernel.org/all/20231002222939.1519-1-andrew.kanner@gmail.com/raw
      - use the unlikely() optimization for the case when struct_size()
        returns SIZE_MAX, suggested by Alexander Lobakin
      - cc-ed 4 more maintainers, mentioned by the cc_maintainers patchwork
        test
    v1: https://lore.kernel.org/all/20230928204440.543-1-andrew.kanner@gmail.com/T/
      - RFC notes:
        It was found that net/xdp/xsk.c:xsk_setsockopt() uses
        copy_from_sockptr() to get the number of entries (int) for the
        XDP_RX_RING / XDP_TX_RING and XDP_UMEM_FILL_RING /
        XDP_UMEM_COMPLETION_RING cases. Next, xsk_init_queue() has 2 sanity
        checks, (entries == 0) and (!is_power_of_2(entries)), for which
        -EINVAL is returned. After that, net/xdp/xsk_queue.c:xskq_create()
        calculates the size by multiplying the number of entries (int) with
        the size of u64, at least.

        I wonder if there should be an upper bound (e.g. a 3rd sanity check
        inside xsk_init_queue()). It seems that without an upper limit it's
        quite easy to overflow the allocated size (SIZE_MAX), especially on
        32-bit architectures, for example the arm nodes which were used by
        syzkaller. In this patch I added a naive check for SIZE_MAX which
        helped to skip the zero-size allocation after the overflow, but
        maybe it's not quite right. Please suggest if you have any thoughts
        about an appropriate limit for the size of these xdp rings.

        PS: the initial number of entries is 0x20000000 in the syzkaller
        repro (a userspace sketch of this path is included after the patch
        below):
          syscall(__NR_setsockopt, (intptr_t)r[0], 0x11b, 3, 0x20000040, 0x20);
        Link: https://syzkaller.appspot.com/text?tag=ReproC&x=10910f18280000

 net/xdp/xsk_queue.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/net/xdp/xsk_queue.c b/net/xdp/xsk_queue.c
index f8905400ee07..c7e8bbb12752 100644
--- a/net/xdp/xsk_queue.c
+++ b/net/xdp/xsk_queue.c
@@ -34,6 +34,11 @@ struct xsk_queue *xskq_create(u32 nentries, bool umem_queue)
 	q->ring_mask = nentries - 1;
 
 	size = xskq_get_ring_size(q, umem_queue);
+	if (unlikely(size == SIZE_MAX)) {
+		kfree(q);
+		return NULL;
+	}
+
 	size = PAGE_ALIGN(size);
 
 	q->ring = vmalloc_user(size);
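
For reference, below is a hedged userspace sketch of the reproducer path
described in the notes above. The constants come from <linux/if_xdp.h>
(optname 3 in the syzkaller repro corresponds to XDP_TX_RING), the entries
value is taken from the repro, and creating an AF_XDP socket requires
CAP_NET_RAW. On a 32-bit kernel without this patch the setsockopt() call is
expected to reach the vmalloc_user() warning shown above.

  #include <stdio.h>
  #include <sys/socket.h>
  #include <linux/if_xdp.h>

  #ifndef AF_XDP
  #define AF_XDP  44
  #endif
  #ifndef SOL_XDP
  #define SOL_XDP 283
  #endif

  int main(void)
  {
          int entries = 0x20000000;  /* power of 2, passes xsk_init_queue() checks */
          int fd = socket(AF_XDP, SOCK_RAW, 0);

          if (fd < 0) {
                  perror("socket(AF_XDP)");  /* needs CAP_NET_RAW */
                  return 1;
          }

          /* 0x20000000 descriptors * 16 bytes overflows size_t on 32-bit,
           * so struct_size() in xskq_get_ring_size() saturates to SIZE_MAX.
           */
          if (setsockopt(fd, SOL_XDP, XDP_TX_RING, &entries, sizeof(entries)))
                  perror("setsockopt(XDP_TX_RING)");

          return 0;
  }
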