From patchwork Tue Oct 18 09:33:36 2022
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 4054
From: Kees Cook
Miller" Cc: Kees Cook , Eric Dumazet , Jakub Kicinski , Paolo Abeni , netdev@vger.kernel.org, Greg Kroah-Hartman , Nick Desaulniers , David Rientjes , Vlastimil Babka , Pavel Begunkov , Menglong Dong , linux-kernel@vger.kernel.org, linux-hardening@vger.kernel.org Subject: [PATCH v3][next] skbuff: Proactively round up to kmalloc bucket size Date: Tue, 18 Oct 2022 02:33:36 -0700 Message-Id: <20221018093005.give.246-kees@kernel.org> X-Mailer: git-send-email 2.34.1 MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=5848; h=from:subject:message-id; bh=pkLoivXaq3CVJLhapnRLgcHPxB9L7soYhnHwJEdUlPc=; b=owEBbQKS/ZANAwAKAYly9N/cbcAmAcsmYgBjTnLvD8DVluil9nQfvRUSQMN+hxeTSt8QuPl3dr4u gC7wBFaJAjMEAAEKAB0WIQSlw/aPIp3WD3I+bhOJcvTf3G3AJgUCY05y7wAKCRCJcvTf3G3AJrQKEA CXK8cN3LQuRpfc2qi+jIv4AZtTeh+TEquNKS03J+jyWEQMX2TfMqJEBtXdWx8o60qualisfkZTw2dY QPfnJO5jb9mHv6kYBWvrBx19X+wNA4AQNJjDF8oJclIwC8wicuY1Be0Pv0ObTPKKsfn7vroHgK+e+C tKAt5ZGF033olGPFQdZQnx/RKiP1i5XySSGr0DjOSn5H+BjA2ALBPe9omF6NhqsiDEHkjUSCpWpGNr IEMdF0VrB5QNU/SacfDGnSjn4B9r94dF6tqUcO0vm62Va2MnPYMhiI0mssF1Eim4bNgrNTKRFv5VHg y2a2jFqmfLo3HPGO0uSTa4IMIONZAg376fbjHBkHwDtWJsyaSxjXQShTatXtSHOOSi4gDB4Cahvzs7 pOm6d/BZu5ubZBNMq7gyZrUYYHxrfI4ATFpd7WPoMa4i1ra4ZiE4niMHnK4AQAn9hIhmTOMcEeokUh yz3wU5NACqG32ulU633t6rQMLR0dw/53ldBXPRlWB4OEgr7qK+9bTog24H4yM0AkgInaCOJ1tVuoQM 0+lqCUvjwkKC6vZje+Xh3ag8SXfeyne/gaoxV0+mOG0/AINZecQ44rBNkryrzz/PDwxqkeVzjrWO50 jBk3cWIUCieUvThMEIW6ZjRAe9Q+RRrHsJOMEv6ab4HrLMOQRT375cQHq8bA== X-Developer-Key: i=keescook@chromium.org; a=openpgp; fpr=A5C3F68F229DD60F723E6E138972F4DFDC6DC026 X-Spam-Status: No, score=-2.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, SPF_HELO_NONE,SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1747017489494573370?= X-GMAIL-MSGID: =?utf-8?q?1747017489494573370?= Instead of discovering the kmalloc bucket size _after_ allocation, round up proactively so the allocation is explicitly made for the full size, allowing the compiler to correctly reason about the resulting size of the buffer through the existing __alloc_size() hint. This will allow for kernels built with CONFIG_UBSAN_BOUNDS or the coming dynamic bounds checking under CONFIG_FORTIFY_SOURCE to gain back the __alloc_size() hints that were temporarily reverted in commit 93dd04ab0b2b ("slab: remove __alloc_size attribute from __kmalloc_track_caller") Cc: "David S. 
Miller" Cc: Eric Dumazet Cc: Jakub Kicinski Cc: Paolo Abeni Cc: netdev@vger.kernel.org Cc: Greg Kroah-Hartman Cc: Nick Desaulniers Cc: David Rientjes Cc: Vlastimil Babka Signed-off-by: Kees Cook --- v3: refactor again to pass allocation size more cleanly to callers v2: https://lore.kernel.org/lkml/20220923202822.2667581-4-keescook@chromium.org/ --- net/core/skbuff.c | 41 ++++++++++++++++++++++------------------- 1 file changed, 22 insertions(+), 19 deletions(-) diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 1d9719e72f9d..3ea1032d03ec 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -425,11 +425,12 @@ EXPORT_SYMBOL(napi_build_skb); * memory is free */ static void *kmalloc_reserve(size_t size, gfp_t flags, int node, - bool *pfmemalloc) + bool *pfmemalloc, size_t *alloc_size) { void *obj; bool ret_pfmemalloc = false; + size = kmalloc_size_roundup(size); /* * Try a regular allocation, when that fails and we're not entitled * to the reserves, fail. @@ -448,6 +449,7 @@ static void *kmalloc_reserve(size_t size, gfp_t flags, int node, if (pfmemalloc) *pfmemalloc = ret_pfmemalloc; + *alloc_size = size; return obj; } @@ -479,7 +481,7 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask, { struct kmem_cache *cache; struct sk_buff *skb; - unsigned int osize; + size_t alloc_size; bool pfmemalloc; u8 *data; @@ -506,15 +508,15 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask, */ size = SKB_DATA_ALIGN(size); size += SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); - data = kmalloc_reserve(size, gfp_mask, node, &pfmemalloc); - if (unlikely(!data)) - goto nodata; - /* kmalloc(size) might give us more room than requested. + /* kmalloc(size) might give us more room than requested, so + * allocate the true bucket size up front. * Put skb_shared_info exactly at the end of allocated zone, * to allow max possible filling before reallocation. */ - osize = ksize(data); - size = SKB_WITH_OVERHEAD(osize); + data = kmalloc_reserve(size, gfp_mask, node, &pfmemalloc, &alloc_size); + if (unlikely(!data)) + goto nodata; + size = SKB_WITH_OVERHEAD(alloc_size); prefetchw(data + size); /* @@ -523,7 +525,7 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask, * the tail pointer in struct sk_buff! */ memset(skb, 0, offsetof(struct sk_buff, tail)); - __build_skb_around(skb, data, osize); + __build_skb_around(skb, data, alloc_size); skb->pfmemalloc = pfmemalloc; if (flags & SKB_ALLOC_FCLONE) { @@ -1816,6 +1818,7 @@ int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail, { int i, osize = skb_end_offset(skb); int size = osize + nhead + ntail; + size_t alloc_size; long off; u8 *data; @@ -1830,10 +1833,10 @@ int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail, if (skb_pfmemalloc(skb)) gfp_mask |= __GFP_MEMALLOC; data = kmalloc_reserve(size + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)), - gfp_mask, NUMA_NO_NODE, NULL); + gfp_mask, NUMA_NO_NODE, NULL, &alloc_size); if (!data) goto nodata; - size = SKB_WITH_OVERHEAD(ksize(data)); + size = SKB_WITH_OVERHEAD(alloc_size); /* Copy only real data... and, alas, header. This should be * optimized for the cases when header is void. 
@@ -6169,19 +6172,19 @@ static int pskb_carve_inside_header(struct sk_buff *skb, const u32 off,
 	int i;
 	int size = skb_end_offset(skb);
 	int new_hlen = headlen - off;
+	size_t alloc_size;
 	u8 *data;
 
 	size = SKB_DATA_ALIGN(size);
 
 	if (skb_pfmemalloc(skb))
 		gfp_mask |= __GFP_MEMALLOC;
-	data = kmalloc_reserve(size +
-			       SKB_DATA_ALIGN(sizeof(struct skb_shared_info)),
-			       gfp_mask, NUMA_NO_NODE, NULL);
+	data = kmalloc_reserve(size + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)),
+			       gfp_mask, NUMA_NO_NODE, NULL, &alloc_size);
 	if (!data)
 		return -ENOMEM;
 
-	size = SKB_WITH_OVERHEAD(ksize(data));
+	size = SKB_WITH_OVERHEAD(alloc_size);
 
 	/* Copy real data, and all frags */
 	skb_copy_from_linear_data_offset(skb, off, data, new_hlen);
@@ -6290,18 +6293,18 @@ static int pskb_carve_inside_nonlinear(struct sk_buff *skb, const u32 off,
 	u8 *data;
 	const int nfrags = skb_shinfo(skb)->nr_frags;
 	struct skb_shared_info *shinfo;
+	size_t alloc_size;
 
 	size = SKB_DATA_ALIGN(size);
 
 	if (skb_pfmemalloc(skb))
 		gfp_mask |= __GFP_MEMALLOC;
-	data = kmalloc_reserve(size +
-			       SKB_DATA_ALIGN(sizeof(struct skb_shared_info)),
-			       gfp_mask, NUMA_NO_NODE, NULL);
+	data = kmalloc_reserve(size + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)),
+			       gfp_mask, NUMA_NO_NODE, NULL, &alloc_size);
 	if (!data)
 		return -ENOMEM;
 
-	size = SKB_WITH_OVERHEAD(ksize(data));
+	size = SKB_WITH_OVERHEAD(alloc_size);
 
 	memcpy((struct skb_shared_info *)(data + size),
 	       skb_shinfo(skb), offsetof(struct skb_shared_info, frags[0]));
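For readers unfamiliar with why the ordering matters to the compiler,
here is a standalone userspace analogy (not kernel code; the wrapper and
the sizes are made up). The alloc_size attribute plays the role of the
kernel's __alloc_size() hint, and __builtin_dynamic_object_size() is
roughly what FORTIFY_SOURCE-style dynamic checks consult. Build with
gcc -O2 or clang -O2:

#include <stdio.h>
#include <stdlib.h>

/* Wrapper carrying the same kind of size hint kmalloc() has in the kernel. */
__attribute__((alloc_size(1)))
static void *my_alloc(size_t size)
{
        return malloc(size);
}

int main(void)
{
        /* Pretend the allocator's bucket rounds a 100-byte request up to 128. */
        void *p = my_alloc(100);
        void *q = my_alloc(128);        /* round up *before* allocating */

        /* The compiler only "sees" the size passed to my_alloc(), so p
         * appears to be 100 bytes even if 128 are really usable; rounding
         * up first makes the hint match the usable size, so bounds checks
         * won't trip on valid accesses into the slack space.
         */
        printf("hinted size of p: %zu\n", __builtin_dynamic_object_size(p, 0));
        printf("hinted size of q: %zu\n", __builtin_dynamic_object_size(q, 0));

        free(p);
        free(q);
        return 0;
}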