Message ID | 20230417074016.3920-2-dinghui@sangfor.com.cn |
---|---|
State | New |
Headers |
|
Series |
iavf: Fix issues when setting channels concurrency with removing
|
|
Commit Message
Ding Hui
April 17, 2023, 7:40 a.m. UTC
We do netif_napi_add() for all allocated q_vectors[], but potentially
do netif_napi_del() for only some of them, then kfree() q_vectors and
leave invalid pointers at dev->napi_list.
If num_active_queues is unexpectedly changed to less than the number of
allocated q_vectors[], then on iavf_remove we might see a UAF in
free_netdev like this:
[ 4093.900222] ==================================================================
[ 4093.900230] BUG: KASAN: use-after-free in free_netdev+0x308/0x390
[ 4093.900232] Read of size 8 at addr ffff88b4dc145640 by task test-iavf-1.sh/6699
[ 4093.900233]
[ 4093.900236] CPU: 10 PID: 6699 Comm: test-iavf-1.sh Kdump: loaded Tainted: G O --------- -t - 4.18.0 #1
[ 4093.900238] Hardware name: Powerleader PR2008AL/H12DSi-N6, BIOS 2.0 04/09/2021
[ 4093.900239] Call Trace:
[ 4093.900244] dump_stack+0x71/0xab
[ 4093.900249] print_address_description+0x6b/0x290
[ 4093.900251] ? free_netdev+0x308/0x390
[ 4093.900252] kasan_report+0x14a/0x2b0
[ 4093.900254] free_netdev+0x308/0x390
[ 4093.900261] iavf_remove+0x825/0xd20 [iavf]
[ 4093.900265] pci_device_remove+0xa8/0x1f0
[ 4093.900268] device_release_driver_internal+0x1c6/0x460
[ 4093.900271] pci_stop_bus_device+0x101/0x150
[ 4093.900273] pci_stop_and_remove_bus_device+0xe/0x20
[ 4093.900275] pci_iov_remove_virtfn+0x187/0x420
[ 4093.900277] ? pci_iov_add_virtfn+0xe10/0xe10
[ 4093.900278] ? pci_get_subsys+0x90/0x90
[ 4093.900280] sriov_disable+0xed/0x3e0
[ 4093.900282] ? bus_find_device+0x12d/0x1a0
[ 4093.900290] i40e_free_vfs+0x754/0x1210 [i40e]
[ 4093.900298] ? i40e_reset_all_vfs+0x880/0x880 [i40e]
[ 4093.900299] ? pci_get_device+0x7c/0x90
[ 4093.900300] ? pci_get_subsys+0x90/0x90
[ 4093.900306] ? pci_vfs_assigned.part.7+0x144/0x210
[ 4093.900309] ? __mutex_lock_slowpath+0x10/0x10
[ 4093.900315] i40e_pci_sriov_configure+0x1fa/0x2e0 [i40e]
[ 4093.900318] sriov_numvfs_store+0x214/0x290
[ 4093.900320] ? sriov_totalvfs_show+0x30/0x30
[ 4093.900321] ? __mutex_lock_slowpath+0x10/0x10
[ 4093.900323] ? __check_object_size+0x15a/0x350
[ 4093.900326] kernfs_fop_write+0x280/0x3f0
[ 4093.900329] vfs_write+0x145/0x440
[ 4093.900330] ksys_write+0xab/0x160
[ 4093.900332] ? __ia32_sys_read+0xb0/0xb0
[ 4093.900334] ? fput_many+0x1a/0x120
[ 4093.900335] ? filp_close+0xf0/0x130
[ 4093.900338] do_syscall_64+0xa0/0x370
[ 4093.900339] ? page_fault+0x8/0x30
[ 4093.900341] entry_SYSCALL_64_after_hwframe+0x65/0xca
[ 4093.900357] RIP: 0033:0x7f16ad4d22c0
[ 4093.900359] Code: 73 01 c3 48 8b 0d d8 cb 2c 00 f7 d8 64 89 01 48 83 c8 ff c3 66 0f 1f 44 00 00 83 3d 89 24 2d 00 00 75 10 b8 01 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 31 c3 48 83 ec 08 e8 fe dd 01 00 48 89 04 24
[ 4093.900360] RSP: 002b:00007ffd6491b7f8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
[ 4093.900362] RAX: ffffffffffffffda RBX: 0000000000000002 RCX: 00007f16ad4d22c0
[ 4093.900363] RDX: 0000000000000002 RSI: 0000000001a41408 RDI: 0000000000000001
[ 4093.900364] RBP: 0000000001a41408 R08: 00007f16ad7a1780 R09: 00007f16ae1f2700
[ 4093.900364] R10: 0000000000000001 R11: 0000000000000246 R12: 0000000000000002
[ 4093.900365] R13: 0000000000000001 R14: 00007f16ad7a0620 R15: 0000000000000001
[ 4093.900367]
[ 4093.900368] Allocated by task 820:
[ 4093.900371] kasan_kmalloc+0xa6/0xd0
[ 4093.900373] __kmalloc+0xfb/0x200
[ 4093.900376] iavf_init_interrupt_scheme+0x63b/0x1320 [iavf]
[ 4093.900380] iavf_watchdog_task+0x3d51/0x52c0 [iavf]
[ 4093.900382] process_one_work+0x56a/0x11f0
[ 4093.900383] worker_thread+0x8f/0xf40
[ 4093.900384] kthread+0x2a0/0x390
[ 4093.900385] ret_from_fork+0x1f/0x40
[ 4093.900387] 0xffffffffffffffff
[ 4093.900387]
[ 4093.900388] Freed by task 6699:
[ 4093.900390] __kasan_slab_free+0x137/0x190
[ 4093.900391] kfree+0x8b/0x1b0
[ 4093.900394] iavf_free_q_vectors+0x11d/0x1a0 [iavf]
[ 4093.900397] iavf_remove+0x35a/0xd20 [iavf]
[ 4093.900399] pci_device_remove+0xa8/0x1f0
[ 4093.900400] device_release_driver_internal+0x1c6/0x460
[ 4093.900401] pci_stop_bus_device+0x101/0x150
[ 4093.900402] pci_stop_and_remove_bus_device+0xe/0x20
[ 4093.900403] pci_iov_remove_virtfn+0x187/0x420
[ 4093.900404] sriov_disable+0xed/0x3e0
[ 4093.900409] i40e_free_vfs+0x754/0x1210 [i40e]
[ 4093.900415] i40e_pci_sriov_configure+0x1fa/0x2e0 [i40e]
[ 4093.900416] sriov_numvfs_store+0x214/0x290
[ 4093.900417] kernfs_fop_write+0x280/0x3f0
[ 4093.900418] vfs_write+0x145/0x440
[ 4093.900419] ksys_write+0xab/0x160
[ 4093.900420] do_syscall_64+0xa0/0x370
[ 4093.900421] entry_SYSCALL_64_after_hwframe+0x65/0xca
[ 4093.900422] 0xffffffffffffffff
[ 4093.900422]
[ 4093.900424] The buggy address belongs to the object at ffff88b4dc144200
which belongs to the cache kmalloc-8k of size 8192
[ 4093.900425] The buggy address is located 5184 bytes inside of
8192-byte region [ffff88b4dc144200, ffff88b4dc146200)
[ 4093.900425] The buggy address belongs to the page:
[ 4093.900427] page:ffffea00d3705000 refcount:1 mapcount:0 mapping:ffff88bf04415c80 index:0x0 compound_mapcount: 0
[ 4093.900430] flags: 0x10000000008100(slab|head)
[ 4093.900433] raw: 0010000000008100 dead000000000100 dead000000000200 ffff88bf04415c80
[ 4093.900434] raw: 0000000000000000 0000000000030003 00000001ffffffff 0000000000000000
[ 4093.900434] page dumped because: kasan: bad access detected
[ 4093.900435]
[ 4093.900435] Memory state around the buggy address:
[ 4093.900436] ffff88b4dc145500: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[ 4093.900437] ffff88b4dc145580: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[ 4093.900438] >ffff88b4dc145600: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[ 4093.900438] ^
[ 4093.900439] ffff88b4dc145680: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[ 4093.900440] ffff88b4dc145700: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[ 4093.900440] ==================================================================
Fix it by making netif_napi_del() match netif_napi_add().
Signed-off-by: Ding Hui <dinghui@sangfor.com.cn>
Cc: Donglin Peng <pengdonglin@sangfor.com.cn>
CC: Huang Cun <huangcun@sangfor.com.cn>
---
drivers/net/ethernet/intel/iavf/iavf_main.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)
Comments
Hi Ding Hui,

On Mon, Apr 17, 2023 at 03:40:15PM +0800, Ding Hui wrote:
> We do netif_napi_add() for all allocated q_vectors[], but potentially
> do netif_napi_del() for part of them, then kfree q_vectors and lefted

nit: lefted -> leave

> invalid pointers at dev->napi_list.
>
> If num_active_queues is changed to less than allocated q_vectors[] by
> unexpected, when iavf_remove, we might see UAF in free_netdev like this:
>
> ...
>
> Fix it by letting netif_napi_del() match to netif_napi_add().
>
> Signed-off-by: Ding Hui <dinghui@sangfor.com.cn>
> Cc: Donglin Peng <pengdonglin@sangfor.com.cn>
> CC: Huang Cun <huangcun@sangfor.com.cn>

as this is a fix it probably should have a fixes tag.
I wonder if it should be:

Fixes: cc0529271f23 ("i40evf: don't use more queues than CPUs")

Code change looks good to me.

Reviewed-by: Simon Horman <simon.horman@corigine.com>

> ---
>  drivers/net/ethernet/intel/iavf/iavf_main.c | 6 +-----
>  1 file changed, 1 insertion(+), 5 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
> index 095201e83c9d..a57e3425f960 100644
> --- a/drivers/net/ethernet/intel/iavf/iavf_main.c
> +++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
> @@ -1849,19 +1849,15 @@ static int iavf_alloc_q_vectors(struct iavf_adapter *adapter)
>  static void iavf_free_q_vectors(struct iavf_adapter *adapter)
>  {
>  	int q_idx, num_q_vectors;
> -	int napi_vectors;
>
>  	if (!adapter->q_vectors)
>  		return;
>
>  	num_q_vectors = adapter->num_msix_vectors - NONQ_VECS;
> -	napi_vectors = adapter->num_active_queues;
>
>  	for (q_idx = 0; q_idx < num_q_vectors; q_idx++) {
>  		struct iavf_q_vector *q_vector = &adapter->q_vectors[q_idx];
> -
> -		if (q_idx < napi_vectors)
> -			netif_napi_del(&q_vector->napi);
> +		netif_napi_del(&q_vector->napi);
>  	}
>  	kfree(adapter->q_vectors);
>  	adapter->q_vectors = NULL;
> --
> 2.17.1
>
On Tue, Apr 18, 2023 at 09:48:38PM +0200, Simon Horman wrote:
> Hi Ding Hui,
>
> On Mon, Apr 17, 2023 at 03:40:15PM +0800, Ding Hui wrote:
> > We do netif_napi_add() for all allocated q_vectors[], but potentially
> > do netif_napi_del() for part of them, then kfree q_vectors and lefted
>
> nit: lefted -> leave
>
> > invalid pointers at dev->napi_list.
> >
> > ...
> >
> > Fix it by letting netif_napi_del() match to netif_napi_add().
> >
> > Signed-off-by: Ding Hui <dinghui@sangfor.com.cn>
> > Cc: Donglin Peng <pengdonglin@sangfor.com.cn>
> > CC: Huang Cun <huangcun@sangfor.com.cn>

Oops, the comment below was meant for
- [RESEND PATCH net 1/2] iavf: Fix use-after-free in free_netdev

> as this is a fix it probably should have a fixes tag.
> I wonder if it should be:
>
> Fixes: cc0529271f23 ("i40evf: don't use more queues than CPUs")
>
> Code change looks good to me.
>
> Reviewed-by: Simon Horman <simon.horman@corigine.com>
On Tue, Apr 18, 2023 at 09:50:17PM +0200, Simon Horman wrote:
> On Tue, Apr 18, 2023 at 09:48:38PM +0200, Simon Horman wrote:
> > Hi Ding Hui,
> >
> > On Mon, Apr 17, 2023 at 03:40:15PM +0800, Ding Hui wrote:
> > > We do netif_napi_add() for all allocated q_vectors[], but potentially
> > > do netif_napi_del() for part of them, then kfree q_vectors and lefted
> >
> > nit: lefted -> leave
> >
> > > ...
>
> Oops, the comment below was meant for
> - [RESEND PATCH net 1/2] iavf: Fix use-after-free in free_netdev

... and this is patch 1/2. Sorry for (my own) confusion.

> > as this is a fix it probably should have a fixes tag.
> > I wonder if it should be:
> >
> > Fixes: cc0529271f23 ("i40evf: don't use more queues than CPUs")
> >
> > Code change looks good to me.
> >
> > Reviewed-by: Simon Horman <simon.horman@corigine.com>
On 2023/4/19 3:48, Simon Horman wrote:
> Hi Ding Hui,
>
> On Mon, Apr 17, 2023 at 03:40:15PM +0800, Ding Hui wrote:
>> We do netif_napi_add() for all allocated q_vectors[], but potentially
>> do netif_napi_del() for part of them, then kfree q_vectors and lefted
>
> nit: lefted -> leave
>

Thanks, I'll update in v2.

>> invalid pointers at dev->napi_list.
>>
>> If num_active_queues is changed to less than allocated q_vectors[] by
>> unexpected, when iavf_remove, we might see UAF in free_netdev like this:
>>
>> ...
>>
>> Fix it by letting netif_napi_del() match to netif_napi_add().
>>
>> Signed-off-by: Ding Hui <dinghui@sangfor.com.cn>
>> Cc: Donglin Peng <pengdonglin@sangfor.com.cn>
>> CC: Huang Cun <huangcun@sangfor.com.cn>
>
> as this is a fix it probably should have a fixes tag.
> I wonder if it should be:
>
> Fixes: cc0529271f23 ("i40evf: don't use more queues than CPUs")

I don't think so.
I searched the git log, and found that the mismatched usage was
introduced since the beginning of i40evf_main.c, so I'll add

Fixes: 5eae00c57f5e ("i40evf: main driver core")

in v2.

>
> Code change looks good to me.
>
> Reviewed-by: Simon Horman <simon.horman@corigine.com>
>

Thanks. And sorry for your confusion since my RESEND.
>> ---
>>  drivers/net/ethernet/intel/iavf/iavf_main.c | 6 +-----
>>  1 file changed, 1 insertion(+), 5 deletions(-)
>>
>> diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
>> index 095201e83c9d..a57e3425f960 100644
>> --- a/drivers/net/ethernet/intel/iavf/iavf_main.c
>> +++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
>> @@ -1849,19 +1849,15 @@ static int iavf_alloc_q_vectors(struct iavf_adapter *adapter)
>>  static void iavf_free_q_vectors(struct iavf_adapter *adapter)
>>  {
>>  	int q_idx, num_q_vectors;
>> -	int napi_vectors;
>>
>>  	if (!adapter->q_vectors)
>>  		return;
>>
>>  	num_q_vectors = adapter->num_msix_vectors - NONQ_VECS;
>> -	napi_vectors = adapter->num_active_queues;
>>
>>  	for (q_idx = 0; q_idx < num_q_vectors; q_idx++) {
>>  		struct iavf_q_vector *q_vector = &adapter->q_vectors[q_idx];
>> -
>> -		if (q_idx < napi_vectors)
>> -			netif_napi_del(&q_vector->napi);
>> +		netif_napi_del(&q_vector->napi);
>>  	}
>>  	kfree(adapter->q_vectors);
>>  	adapter->q_vectors = NULL;
>> --
>> 2.17.1
>>
>
On Wed, Apr 19, 2023 at 09:11:37AM +0800, Ding Hui wrote:
> On 2023/4/19 3:48, Simon Horman wrote:
> > Hi Ding Hui,
> >
> > On Mon, Apr 17, 2023 at 03:40:15PM +0800, Ding Hui wrote:
> > > We do netif_napi_add() for all allocated q_vectors[], but potentially
> > > do netif_napi_del() for part of them, then kfree q_vectors and lefted
> >
> > nit: lefted -> leave
>
> Thanks, I'll update in v2.
>
> > > invalid pointers at dev->napi_list.
> > >
> > > If num_active_queues is changed to less than allocated q_vectors[] by
> > > unexpected, when iavf_remove, we might see UAF in free_netdev like this:
> > >
> ...
> > >
> > > Fix it by letting netif_napi_del() match to netif_napi_add().
> > >
> > > Signed-off-by: Ding Hui <dinghui@sangfor.com.cn>
> > > Cc: Donglin Peng <pengdonglin@sangfor.com.cn>
> > > CC: Huang Cun <huangcun@sangfor.com.cn>
> >
> > as this is a fix it probably should have a fixes tag.
> > I wonder if it should be:
> >
> > Fixes: cc0529271f23 ("i40evf: don't use more queues than CPUs")
>
> I don't think so.
> I searched the git log, and found that the mismatched usage was
> introduced since the beginning of i40evf_main.c, so I'll add
>
> Fixes: 5eae00c57f5e ("i40evf: main driver core")

Yes, agreed, that is the right tag.
diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
index 095201e83c9d..a57e3425f960 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
@@ -1849,19 +1849,15 @@ static int iavf_alloc_q_vectors(struct iavf_adapter *adapter)
 static void iavf_free_q_vectors(struct iavf_adapter *adapter)
 {
 	int q_idx, num_q_vectors;
-	int napi_vectors;
 
 	if (!adapter->q_vectors)
 		return;
 
 	num_q_vectors = adapter->num_msix_vectors - NONQ_VECS;
-	napi_vectors = adapter->num_active_queues;
 
 	for (q_idx = 0; q_idx < num_q_vectors; q_idx++) {
 		struct iavf_q_vector *q_vector = &adapter->q_vectors[q_idx];
-
-		if (q_idx < napi_vectors)
-			netif_napi_del(&q_vector->napi);
+		netif_napi_del(&q_vector->napi);
 	}
 	kfree(adapter->q_vectors);
 	adapter->q_vectors = NULL;