Message ID | 20231108012821.56104-1-junxiao.bi@oracle.com
---|---
State | New |
Series | [RFC] workqueue: allow system workqueue be used in memory reclaim
Commit Message
Junxiao Bi
Nov. 8, 2023, 1:28 a.m. UTC
The following deadlock was triggered on Intel IMSM raid1 volumes.

The sequence of events is this:

1. Memory reclaim was waiting on an xfs journal flush and got stuck
behind md flush work.

2. The md flush work was queued on the "md" workqueue but never
executed: no kworker thread could be created, and the rescuer thread
was busy executing the md flush work of another md disk, itself stuck
because the "MD_SB_CHANGE_PENDING" flag was set.

3. That flag had been set by an md write process asking to update the
md superblock (changing the in_sync status to 0). It used
kernfs_notify() to ask the "mdmon" process to update the superblock,
then waited for the flag to be cleared.

4. But "mdmon" was never woken up: kernfs_notify() relies on the
system-wide workqueue "system_wq" to deliver the notification, and
since that workqueue has no rescuer thread, the notification never
happens under memory pressure. The sketch below illustrates this path.
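For reference, the notification path in step 4 follows the pattern sketched below. This is a minimal sketch of the deferred-notify pattern in fs/kernfs/file.c, not the verbatim kernel source; kernfs_notify_sketch() is an illustrative name:

#include <linux/workqueue.h>

/* Work function: walks the list of kernfs nodes with pending events
 * and wakes up poll()/fsnotify waiters -- e.g. the "mdmon" process
 * polling an md sysfs attribute. */
static void kernfs_notify_workfn(struct work_struct *work)
{
	/* ... wake up waiters for each pending node ... */
}

static DECLARE_WORK(kernfs_notify_work, kernfs_notify_workfn);

void kernfs_notify_sketch(void)
{
	/*
	 * schedule_work() queues on system_wq.  system_wq is created
	 * without WQ_MEM_RECLAIM, so it has no rescuer thread: if no
	 * kworker can be forked under memory pressure, this work item
	 * never runs and the waiter is never woken.
	 */
	schedule_work(&kernfs_notify_work);
}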
Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
---
kernel/workqueue.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Comments
Hello,

On Tue, Nov 07, 2023 at 05:28:21PM -0800, Junxiao Bi wrote:
> The following deadlock was triggered on Intel IMSM raid1 volumes.
>
> The sequence of events is this:
>
> 1. Memory reclaim was waiting on an xfs journal flush and got stuck
> behind md flush work.
>
> 2. The md flush work was queued on the "md" workqueue but never
> executed: no kworker thread could be created, and the rescuer thread
> was busy executing the md flush work of another md disk, itself stuck
> because the "MD_SB_CHANGE_PENDING" flag was set.
>
> 3. That flag had been set by an md write process asking to update the
> md superblock (changing the in_sync status to 0). It used
> kernfs_notify() to ask the "mdmon" process to update the superblock,
> then waited for the flag to be cleared.
>
> 4. But "mdmon" was never woken up: kernfs_notify() relies on the
> system-wide workqueue "system_wq" to deliver the notification, and
> since that workqueue has no rescuer thread, the notification never
> happens under memory pressure.

Things like this can't be fixed by adding RECLAIM to system_wq because
system_wq is shared and someone else might occupy that rescuer thread.
The flag doesn't guarantee unlimited forward progress. It only
guarantees forward progress of one work item.

That seems to be where the problem is in #2 in the first place. If a
work item is required during memory reclaim, it must have guaranteed
forward progress, but it looks like that's waiting for someone else who
can end up waiting for userspace?

You'll need to untangle the dependencies earlier.

Thanks.
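To make the alternative concrete: a subsystem whose work items must make progress during reclaim would allocate its own rescuer-backed workqueue rather than flag the shared one. A minimal sketch with hypothetical names (md_notify_wq is illustrative, not an existing kernel symbol):

#include <linux/workqueue.h>

/* Hypothetical dedicated workqueue for work that runs on the memory
 * reclaim path.  WQ_MEM_RECLAIM guarantees a rescuer thread, i.e.
 * forward progress for one work item at a time -- not unlimited
 * concurrency, and not progress for anyone else sharing the queue. */
static struct workqueue_struct *md_notify_wq;

static int __init md_notify_wq_init(void)
{
	md_notify_wq = alloc_workqueue("md_notify", WQ_MEM_RECLAIM, 0);
	if (!md_notify_wq)
		return -ENOMEM;
	return 0;
}

/* Callers would then use queue_work(md_notify_wq, &work) instead of
 * schedule_work(&work). */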
On 11/9/23 10:58 AM, Tejun Heo wrote:
> Hello,
>
> On Tue, Nov 07, 2023 at 05:28:21PM -0800, Junxiao Bi wrote:
>> [...]
>
> Things like this can't be fixed by adding RECLAIM to system_wq because
> system_wq is shared and someone else might occupy that rescuer thread.
> The flag doesn't guarantee unlimited forward progress. It only
> guarantees forward progress of one work item.
>
> That seems to be where the problem is in #2 in the first place. If a
> work item is required during memory reclaim, it must have guaranteed
> forward progress, but it looks like that's waiting for someone else
> who can end up waiting for userspace?
>
> You'll need to untangle the dependencies earlier.

Makes sense. Thanks a lot for the comments.

> Thanks.
Hello,

kernel test robot noticed "WARNING:possible_circular_locking_dependency_detected" on:

commit: c8c183493c1dcc874a9d903cb6ba685c98f6c12a ("[RFC] workqueue: allow system workqueue be used in memory reclaim")
url: https://github.com/intel-lab-lkp/linux/commits/Junxiao-Bi/workqueue-allow-system-workqueue-be-used-in-memory-reclaim/20231108-093107
base: https://git.kernel.org/cgit/linux/kernel/git/tj/wq.git for-next
patch link: https://lore.kernel.org/all/20231108012821.56104-1-junxiao.bi@oracle.com/
patch subject: [RFC] workqueue: allow system workqueue be used in memory reclaim

in testcase: boot
compiler: gcc-12
test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 16G

(please refer to attached dmesg/kmsg for entire log/backtrace)

If you fix the issue in a separate patch/commit (i.e. not just a new version of the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <oliver.sang@intel.com>
| Closes: https://lore.kernel.org/oe-lkp/202311161556.59af3ec9-oliver.sang@intel.com

[    6.524239][    T9] WARNING: possible circular locking dependency detected
[    6.524787][    T9] 6.6.0-rc6-00056-gc8c183493c1d #1 Not tainted
[    6.525271][    T9] ------------------------------------------------------
[    6.525606][    T9] kworker/0:1/9 is trying to acquire lock:
[    6.525606][    T9] ffffffff88f6f480 (cpu_hotplug_lock){++++}-{0:0}, at: vmstat_shepherd (include/linux/find.h:63 mm/vmstat.c:2025)
[    6.525606][    T9]
[    6.525606][    T9] but task is already holding lock:
[    6.525606][    T9] ffff888110aa7d88 ((shepherd).work){+.+.}-{0:0}, at: process_one_work (kernel/workqueue.c:2606)
[    6.525606][    T9]
[    6.525606][    T9] which lock already depends on the new lock.
[    6.525606][    T9]
[    6.525606][    T9] the existing dependency chain (in reverse order) is:
[    6.525606][    T9]
[    6.525606][    T9] -> #2 ((shepherd).work){+.+.}-{0:0}:
[    6.525606][    T9] __lock_acquire (kernel/locking/lockdep.c:5136)
[    6.525606][    T9] lock_acquire (kernel/locking/lockdep.c:467 kernel/locking/lockdep.c:5755)
[    6.525606][    T9] process_one_work (arch/x86/include/asm/atomic.h:23 include/linux/atomic/atomic-arch-fallback.h:444 include/linux/jump_label.h:260 include/linux/jump_label.h:270 include/trace/events/workqueue.h:82 kernel/workqueue.c:2629)
[    6.525606][    T9] worker_thread (kernel/workqueue.c:2697 kernel/workqueue.c:2784)
[    6.525606][    T9] kthread (kernel/kthread.c:388)
[    6.525606][    T9] ret_from_fork (arch/x86/kernel/process.c:153)
[    6.525606][    T9] ret_from_fork_asm (arch/x86/entry/entry_64.S:312)
[    6.525606][    T9]
[    6.525606][    T9] -> #1 ((wq_completion)events){+.+.}-{0:0}:
[    6.525606][    T9] __lock_acquire (kernel/locking/lockdep.c:5136)
[    6.525606][    T9] lock_acquire (kernel/locking/lockdep.c:467 kernel/locking/lockdep.c:5755)
[    6.525606][    T9] start_flush_work (kernel/workqueue.c:3383)
[    6.525606][    T9] __flush_work (kernel/workqueue.c:3406)
[    6.525606][    T9] schedule_on_each_cpu (kernel/workqueue.c:3668 (discriminator 3))
[    6.525606][    T9] rcu_tasks_one_gp (kernel/rcu/rcu.h:109 kernel/rcu/tasks.h:587)
[    6.525606][    T9] rcu_tasks_kthread (kernel/rcu/tasks.h:625 (discriminator 1))
[    6.525606][    T9] kthread (kernel/kthread.c:388)
[    6.525606][    T9] ret_from_fork (arch/x86/kernel/process.c:153)
[    6.525606][    T9] ret_from_fork_asm (arch/x86/entry/entry_64.S:312)
[    6.525606][    T9]
[    6.525606][    T9] -> #0 (cpu_hotplug_lock){++++}-{0:0}:
[    6.525606][    T9] check_prev_add (kernel/locking/lockdep.c:3135)
[    6.525606][    T9] validate_chain (kernel/locking/lockdep.c:3254 kernel/locking/lockdep.c:3868)
[    6.525606][    T9] __lock_acquire (kernel/locking/lockdep.c:5136)
[    6.525606][    T9] lock_acquire (kernel/locking/lockdep.c:467 kernel/locking/lockdep.c:5755)
[    6.525606][    T9] cpus_read_lock (include/linux/percpu-rwsem.h:53 kernel/cpu.c:489)
[    6.525606][    T9] vmstat_shepherd (include/linux/find.h:63 mm/vmstat.c:2025)
[    6.525606][    T9] process_one_work (kernel/workqueue.c:2635)
[    6.525606][    T9] worker_thread (kernel/workqueue.c:2697 kernel/workqueue.c:2784)
[    6.525606][    T9] kthread (kernel/kthread.c:388)
[    6.525606][    T9] ret_from_fork (arch/x86/kernel/process.c:153)
[    6.525606][    T9] ret_from_fork_asm (arch/x86/entry/entry_64.S:312)
[    6.525606][    T9]
[    6.525606][    T9] other info that might help us debug this:
[    6.525606][    T9]
[    6.525606][    T9] Chain exists of:
[    6.525606][    T9]   cpu_hotplug_lock --> (wq_completion)events --> (shepherd).work
[    6.525606][    T9]
[    6.525606][    T9] Possible unsafe locking scenario:
[    6.525606][    T9]
[    6.525606][    T9]        CPU0                    CPU1
[    6.525606][    T9]        ----                    ----
[    6.525606][    T9]   lock((shepherd).work);
[    6.525606][    T9]                               lock((wq_completion)events);
[    6.525606][    T9]                               lock((shepherd).work);
[    6.525606][    T9]   rlock(cpu_hotplug_lock);
[    6.525606][    T9]
[    6.525606][    T9]  *** DEADLOCK ***
[    6.525606][    T9]
[    6.525606][    T9] 2 locks held by kworker/0:1/9:
[    6.525606][    T9]  #0: ffff88810007cd48 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work (kernel/workqueue.c:2603)
[    6.525606][    T9]  #1: ffff888110aa7d88 ((shepherd).work){+.+.}-{0:0}, at: process_one_work (kernel/workqueue.c:2606)
[    6.525606][    T9]
[    6.525606][    T9] stack backtrace:
[    6.525606][    T9] CPU: 0 PID: 9 Comm: kworker/0:1 Not tainted 6.6.0-rc6-00056-gc8c183493c1d #1
[    6.525606][    T9] Workqueue: events vmstat_shepherd
[    6.525606][    T9] Call Trace:
[    6.525606][    T9] <TASK>
[    6.525606][    T9] dump_stack_lvl (lib/dump_stack.c:107)
[    6.525606][    T9] check_noncircular (kernel/locking/lockdep.c:2187)
[    6.525606][    T9] ? print_circular_bug (kernel/locking/lockdep.c:2163)
[    6.525606][    T9] ? stack_trace_save (kernel/stacktrace.c:123)
[    6.525606][    T9] ? stack_trace_snprint (kernel/stacktrace.c:114)
[    6.525606][    T9] check_prev_add (kernel/locking/lockdep.c:3135)
[    6.525606][    T9] validate_chain (kernel/locking/lockdep.c:3254 kernel/locking/lockdep.c:3868)
[    6.525606][    T9] ? check_prev_add (kernel/locking/lockdep.c:3824)
[    6.525606][    T9] ? hlock_class (arch/x86/include/asm/bitops.h:228 arch/x86/include/asm/bitops.h:240 include/asm-generic/bitops/instrumented-non-atomic.h:142 kernel/locking/lockdep.c:228)
[    6.525606][    T9] ? mark_lock (kernel/locking/lockdep.c:4655 (discriminator 3))
[    6.525606][    T9] __lock_acquire (kernel/locking/lockdep.c:5136)
[    6.525606][    T9] lock_acquire (kernel/locking/lockdep.c:467 kernel/locking/lockdep.c:5755)
[    6.525606][    T9] ? vmstat_shepherd (include/linux/find.h:63 mm/vmstat.c:2025)
[    6.525606][    T9] ? lock_sync (kernel/locking/lockdep.c:5721)
[    6.525606][    T9] ? debug_object_active_state (lib/debugobjects.c:772)
[    6.525606][    T9] ? __cant_migrate (kernel/sched/core.c:10142)
[    6.525606][    T9] cpus_read_lock (include/linux/percpu-rwsem.h:53 kernel/cpu.c:489)
[    6.525606][    T9] ? vmstat_shepherd (include/linux/find.h:63 mm/vmstat.c:2025)
[    6.525606][    T9] vmstat_shepherd (include/linux/find.h:63 mm/vmstat.c:2025)
[    6.525606][    T9] process_one_work (kernel/workqueue.c:2635)
[    6.525606][    T9] ? worker_thread (kernel/workqueue.c:2740)
[    6.525606][    T9] ? show_pwq (kernel/workqueue.c:2539)
[    6.525606][    T9] ? assign_work (kernel/workqueue.c:1096)
[    6.525606][    T9] worker_thread (kernel/workqueue.c:2697 kernel/workqueue.c:2784)
[    6.525606][    T9] ? __kthread_parkme (kernel/kthread.c:293 (discriminator 3))
[    6.525606][    T9] ? schedule (arch/x86/include/asm/bitops.h:207 (discriminator 1) arch/x86/include/asm/bitops.h:239 (discriminator 1) include/linux/thread_info.h:184 (discriminator 1) include/linux/sched.h:2255 (discriminator 1) kernel/sched/core.c:6773 (discriminator 1))
[    6.525606][    T9] ? process_one_work (kernel/workqueue.c:2730)
[    6.525606][    T9] kthread (kernel/kthread.c:388)
[    6.525606][    T9] ? _raw_spin_unlock_irq (arch/x86/include/asm/irqflags.h:42 arch/x86/include/asm/irqflags.h:77 include/linux/spinlock_api_smp.h:159 kernel/locking/spinlock.c:202)

The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20231116/202311161556.59af3ec9-oliver.sang@intel.com
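For context on the chain lockdep reports: vmstat_shepherd is a system_wq work item that takes cpu_hotplug_lock, while schedule_on_each_cpu() flushes system_wq work items with cpu_hotplug_lock already held, closing the cycle through the workqueue's flush-completion dependency. The sketch below mirrors the schedule_on_each_cpu() pattern from kernel/workqueue.c; treat it as an approximation rather than the exact source:

#include <linux/cpu.h>
#include <linux/percpu.h>
#include <linux/workqueue.h>

/* Approximation of schedule_on_each_cpu(): queue func on every online
 * CPU's system_wq worker and wait for each to finish, all while
 * holding cpu_hotplug_lock via cpus_read_lock(). */
static int schedule_on_each_cpu_sketch(work_func_t func)
{
	int cpu;
	struct work_struct __percpu *works = alloc_percpu(struct work_struct);

	if (!works)
		return -ENOMEM;

	cpus_read_lock();			/* acquires cpu_hotplug_lock */
	for_each_online_cpu(cpu) {
		struct work_struct *work = per_cpu_ptr(works, cpu);

		INIT_WORK(work, func);
		schedule_work_on(cpu, work);	/* queues on system_wq */
	}
	for_each_online_cpu(cpu)
		flush_work(per_cpu_ptr(works, cpu));	/* flushed under the lock */
	cpus_read_unlock();

	free_percpu(works);
	return 0;
}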
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 6e578f576a6f..e3338e3be700 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -6597,7 +6597,7 @@ void __init workqueue_init_early(void)
 		ordered_wq_attrs[i] = attrs;
 	}
 
-	system_wq = alloc_workqueue("events", 0, 0);
+	system_wq = alloc_workqueue("events", WQ_MEM_RECLAIM, 0);
 	system_highpri_wq = alloc_workqueue("events_highpri", WQ_HIGHPRI, 0);
 	system_long_wq = alloc_workqueue("events_long", 0, 0);
 	system_unbound_wq = alloc_workqueue("events_unbound", WQ_UNBOUND,