Message ID: <20230518000920.191583-1-michael.christie@oracle.com>
Headers:
From: Mike Christie <michael.christie@oracle.com>
To: oleg@redhat.com, linux@leemhuis.info, nicolas.dichtel@6wind.com,
    axboe@kernel.dk, ebiederm@xmission.com, torvalds@linux-foundation.org,
    linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
    mst@redhat.com, sgarzare@redhat.com, jasowang@redhat.com,
    stefanha@redhat.com, brauner@kernel.org
Subject: [RFC PATCH 0/8] vhost_tasks: Use CLONE_THREAD/SIGHAND
Date: Wed, 17 May 2023 19:09:12 -0500
X-Mailer: git-send-email 2.25.1
Series: vhost_tasks: Use CLONE_THREAD/SIGHAND
Message
Mike Christie
May 18, 2023, 12:09 a.m. UTC
This patch allows the vhost and vhost_task code to use CLONE_THREAD,
CLONE_SIGHAND and CLONE_FILES. It's an RFC because I didn't do all the
normal testing, haven't converted vsock and vdpa, and I know you guys
will not like the first patch. However, I think it better shows what we
need from the signal code and how we can support signals in the
vhost_task layer.

Note that I took the super simple route and kicked off some work to the
system workqueue. We can do more invasive approaches:

1. Modify the vhost drivers so they can check for IO completions using
   a non-blocking interface. We then don't need to run from the system
   workqueue and can run from the vhost_task.

2. We could drop patch 1 and just say we are doing a polling type of
   approach. We then modify the vhost layer similar to #1, where we can
   check for completions using a non-blocking interface and use the
   vhost_task task.
Comments
On Wed, May 17, 2023 at 07:09:12PM -0500, Mike Christie wrote:
> This patch allows the vhost and vhost_task code to use CLONE_THREAD,
> CLONE_SIGHAND and CLONE_FILES. It's an RFC because I didn't do all the
> normal testing, haven't converted vsock and vdpa, and I know you guys
> will not like the first patch. However, I think it better shows what

Just to summarize, the core idea behind my proposal is that no signal
handling changes are needed unless there's a bug in the current way
io_uring workers already work. All that should be needed is
s/PF_IO_WORKER/PF_USER_WORKER/ in signal.c.

If you follow my proposal then vhost and io_uring workers should almost
collapse into the same concept. Specifically, io_uring workers and vhost
workers should behave the same when it comes to handling signals.

See
https://lore.kernel.org/lkml/20230518-kontakt-geduckt-25bab595f503@brauner

> we need from the signal code and how we can support signals in the
> vhost_task layer.
>
> Note that I took the super simple route and kicked off some work to
> the system workqueue. We can do more invasive approaches:
> 1. Modify the vhost drivers so they can check for IO completions using
>    a non-blocking interface. We then don't need to run from the system
>    workqueue and can run from the vhost_task.
>
> 2. We could drop patch 1 and just say we are doing a polling type
>    of approach. We then modify the vhost layer similar to #1 where we
>    can check for completions using a non-blocking interface and use
>    the vhost_task task.

My preference would be to do whatever is the minimal thing now, has the
least bug potential, and is the easiest to review for us non-vhost
experts. Then you can take all the time to rework and improve the vhost
infra based on the possibilities that using user workers offers. Plus,
that can easily happen in the next kernel cycle.

Remember, we're trying to fix a regression here. A regression on an
unreleased kernel, but still.
On Thu, May 18, 2023 at 10:25:11AM +0200, Christian Brauner wrote:
> Just to summarize, the core idea behind my proposal is that no signal
> handling changes are needed unless there's a bug in the current way
> io_uring workers already work. All that should be needed is
> s/PF_IO_WORKER/PF_USER_WORKER/ in signal.c.
[...]
> Remember, we're trying to fix a regression here. A regression on an
> unreleased kernel, but still.

It's a public holiday here today so I'll try to find time to review this
tomorrow.
On Thu, May 18, 2023 at 10:25:11AM +0200, Christian Brauner wrote:
> Just to summarize, the core idea behind my proposal is that no signal
> handling changes are needed unless there's a bug in the current way
> io_uring workers already work. All that should be needed is
> s/PF_IO_WORKER/PF_USER_WORKER/ in signal.c.
[...]
> Remember, we're trying to fix a regression here. A regression on an
> unreleased kernel, but still.

Just two more thoughts:

The following places currently check for PF_IO_WORKER:

arch/x86/include/asm/fpu/sched.h: !(current->flags & (PF_KTHREAD | PF_IO_WORKER))) {
arch/x86/kernel/fpu/context.h:    if (WARN_ON_ONCE(current->flags & (PF_KTHREAD | PF_IO_WORKER)))
arch/x86/kernel/fpu/core.c:       if (!(current->flags & (PF_KTHREAD | PF_IO_WORKER)) &&

Both PF_KTHREAD and PF_IO_WORKER don't need TIF_NEED_FPU_LOAD because
they never return to userspace. But that's not specific to
PF_IO_WORKERs. Please generalize this to just check for PF_USER_WORKER
via a simple s/PF_IO_WORKER/PF_USER_WORKER/g in these places.

Another thing: in the sched code we have hooks into sched_submit_work()
and sched_update_worker() specific to PF_IO_WORKERs. But again, I don't
think this needs to be special to PF_IO_WORKERs. This might be generally
useful for PF_USER_WORKER, so we should probably generalize this and
have generic user_worker_sleeping() and user_worker_running() helpers
that figure out internally what specific helper to call. That's not
something that needs to be done right now, though, since I don't think
vhost needs this functionality. But we should generalize it for the next
development cycle so we have this all nice and clean when someone
actually needs it.

Overall this will mean that there would only be a single place left
where PF_IO_WORKER would need to be checked, and that's in io_uring code
itself. And if we do things just right we might not even need the
PF_IO_WORKER flag at all. But again, that's just notes for next cycle.

Thoughts? Rotten apples?
On Thu, May 18, 2023 at 10:25:11AM +0200, Christian Brauner wrote:
> Just to summarize, the core idea behind my proposal is that no signal
> handling changes are needed unless there's a bug in the current way
> io_uring workers already work. All that should be needed is
> s/PF_IO_WORKER/PF_USER_WORKER/ in signal.c.
[...]
> Remember, we're trying to fix a regression here. A regression on an
> unreleased kernel, but still.

On Tue, May 16, 2023 at 10:40:01AM +0200, Christian Brauner wrote:
> On Mon, May 15, 2023 at 05:23:12PM -0500, Mike Christie wrote:
> > On 5/15/23 10:44 AM, Linus Torvalds wrote:
> > > On Mon, May 15, 2023 at 7:23 AM Christian Brauner <brauner@kernel.org> wrote:
> > >>
> > >> So I think we will be able to address (1) and (2) by making vhost tasks
> > >> proper threads and blocking every signal except for SIGKILL and SIGSTOP
> > >> and then having vhost handle get_signal() - as you mentioned - the same
> > >> way io_uring already does. We should also remove the ignore_signals
> > >> thing completely imho. I don't think we ever want to do this with user
> > >> workers.
> > >
> > > Right. That's what io_uring does:
> > >
> > >         if (args->io_thread) {
> > >                 /*
> > >                  * Mark us an IO worker, and block any signal that isn't
> > >                  * fatal or STOP
> > >                  */
> > >                 p->flags |= PF_IO_WORKER;
> > >                 siginitsetinv(&p->blocked, sigmask(SIGKILL)|sigmask(SIGSTOP));
> > >         }
> > >
> > > and I really think that vhost should basically do exactly what io_uring does.
> > >
> > > Not because io_uring fundamentally got this right - but simply because
> > > io_uring had almost all the same bugs (and then some), and what the
> > > io_uring worker threads ended up doing was to basically zoom in on
> > > "this works".
> > >
> > > And it zoomed in on it largely by just going for "make it look as much
> > > as possible as a real user thread", because every time the kernel
> > > thread did something different, it just caused problems.
> > >
> > > So I think the patch should just look something like the attached.
> > > Mike, can you test this on whatever vhost test-suite?
> >
> > I tried that approach already and it doesn't work because io_uring and vhost
> > differ in that vhost drivers implement a device where each device has a
> > vhost_task and the drivers have a file_operations for the device. When the
> > vhost_task's parent gets a signal like SIGKILL, then it will exit and call
> > into the vhost driver's file_operations->release function. At this time, we
> > need to do cleanup
>
> But that's no reason why the vhost worker couldn't just be allowed to
> exit on SIGKILL cleanly, similar to io_uring. That's just describing the
> current architecture, which isn't a necessity afaict. And the helper
> thread could, e.g., crash.
>
> > like flush the device which uses the vhost_task. There is also the case
> > where if the vhost_task gets a SIGKILL, we can just exit from under the
> > vhost layer.
>
> In a way I really don't like the patch below, because this should be
> solvable by adapting vhost workers. Right now, vhost is coming from a
> kthread model and we ported it to a user worker model, and the whole
> point of this exercise has been that the workers behave more like
> regular userspace processes. So my tendency is to not massage kernel
> signal handling to now also include a special case for user workers in
> addition to kthreads. That's just the wrong way around, and then vhost
> could've just stuck with kthreads in the first place.
>
> So I'm fine with skipping over the freezing case for now, but SIGKILL
> should be handled imho. Only init and kthreads should get the luxury of
> ignoring SIGKILL.
>
> So, I'm afraid I'm asking some work here of you, but how feasible would
> a model be where vhost_worker(), similar to io_wq_worker(), gracefully
> handles SIGKILL? Yes, I see there's
>
> net.c:   .release = vhost_net_release
> scsi.c:  .release = vhost_scsi_release
> test.c:  .release = vhost_test_release
> vdpa.c:  .release = vhost_vdpa_release
> vsock.c: .release = virtio_transport_release
> vsock.c: .release = vhost_vsock_dev_release
>
> but that means you have all the basic logic in place, and all of those
> drivers also support the VHOST_RESET_OWNER ioctl, which also stops the
> vhost worker. I'm confident that a lot of this can be leveraged to just
> clean up on SIGKILL.
>
> So it feels like this should be achievable by adding a callback to
> struct vhost_worker that gets called when vhost_worker() gets SIGKILL
> and that all the users of vhost workers are forced to implement.
>
> Yes, it is more work, but I think that's the right thing to do and not
> to complicate our signal handling.
>
> Worst case, if this can't be done fast enough we'll have to revert the
> vhost parts. I think the user worker parts are mostly sane and are

As mentioned, if we can't settle this cleanly before -rc4 we should
revert the vhost parts unless Linus wants to have it earlier.
On 19.05.23 14:15, Christian Brauner wrote:
> On Thu, May 18, 2023 at 10:25:11AM +0200, Christian Brauner wrote:
>> On Wed, May 17, 2023 at 07:09:12PM -0500, Mike Christie wrote:
>>> This patch allows the vhost and vhost_task code to use CLONE_THREAD,
>>> CLONE_SIGHAND and CLONE_FILES. It's an RFC because I didn't do all the
>>> normal testing, haven't converted vsock and vdpa, and I know you guys
>>> will not like the first patch. However, I think it better shows what
>>
>> Just to summarize, the core idea behind my proposal is that no signal
>> handling changes are needed unless there's a bug in the current way
>> io_uring workers already work. All that should be needed is
>> s/PF_IO_WORKER/PF_USER_WORKER/ in signal.c.
[...]
>> So it feels like this should be achievable by adding a callback to
>> struct vhost_worker that gets called when vhost_worker() gets SIGKILL
>> and that all the users of vhost workers are forced to implement.
>>
>> Yes, it is more work, but I think that's the right thing to do and not
>> to complicate our signal handling.
>>
>> Worst case, if this can't be done fast enough we'll have to revert the
>> vhost parts. I think the user worker parts are mostly sane and are
>
> As mentioned, if we can't settle this cleanly before -rc4 we should
> revert the vhost parts unless Linus wants to have it earlier.

Meanwhile -rc5 is just a few days away and there are still a lot of
discussions in the patch-set proposed to address the issues[1]. Which is
kinda great (albeit also why I haven't given it a spin yet), but on the
other hand makes me wonder:

Is it maybe time to revert the vhost parts for 6.4 and try again next
cycle?

[1] https://lore.kernel.org/all/20230522025124.5863-1-michael.christie@oracle.com/

Ciao, Thorsten "not sure if I'm asking because I'm affected, or because
it's my duty as regression tracker" Leemhuis
On 01/06/2023 at 09:58, Thorsten Leemhuis wrote:
[snip]
> Meanwhile -rc5 is just a few days away and there are still a lot of
> discussions in the patch-set proposed to address the issues[1]. Which is
> kinda great (albeit also why I haven't given it a spin yet), but on the
> other hand makes me wonder:
>
> Is it maybe time to revert the vhost parts for 6.4 and try again next
> cycle?

At least it's time to find a way to fix this issue :)

Thank you,
Nicolas
On Thu, Jun 01, 2023 at 09:58:38AM +0200, Thorsten Leemhuis wrote:
> On 19.05.23 14:15, Christian Brauner wrote:
> > On Thu, May 18, 2023 at 10:25:11AM +0200, Christian Brauner wrote:
[...]
> > As mentioned, if we can't settle this cleanly before -rc4 we should
> > revert the vhost parts unless Linus wants to have it earlier.
>
> Meanwhile -rc5 is just a few days away and there are still a lot of
> discussions in the patch set proposed to address the issues[1]. Which is
> kinda great (albeit also why I haven't given it a spin yet), but on the
> other hand makes me wonder:

You might've missed it in the thread, but it seems everyone is currently
operating under the assumption that the preferred way is to fix this
rather than revert. See the mail in [1]:

"So I'd really like to finish this. Even if we end up with a hack or
two in signal handling that we can hopefully fix up later by having
vhost fix up some of its current assumptions."

which is why no revert was sent for -rc4. And there's a temporary fix we
seem to have converged on.

@Mike, do you want to prepare an updated version of the temporary fix?
If @Linus prefers to just apply it directly he can just grab it from the
list rather than delaying it. Make sure to grab a Co-developed-by line
on this, @Mike.

Just in case we misunderstood the intention, I also prepared a revert at
the end of this mail that Linus can use. @Thorsten, you can test it if
you want. The revert only reverts the vhost bits, as the general
agreement seems to be that user workers are otherwise the path forward.

[1]: https://lore.kernel.org/lkml/CAHk-=wj4DS=2F5mW+K2P7cVqrsuGd3rKE_2k2BqnnPeeYhUCvg@mail.gmail.com

---

/* Summary */

Switching vhost workers to user workers broke existing workflows because
vhost workers started showing up in ps output, breaking various scripts.
The reason is that vhost user workers are currently spawned as separate
processes and not as threads. Revert the patches converting vhost from
kthreads to vhost workers until vhost is ready to support user workers
created as actual threads.
The following changes since commit 7877cb91f1081754a1487c144d85dc0d2e2e7fc4:

  Linux 6.4-rc4 (2023-05-28 07:49:00 -0400)

are available in the Git repository at:

  git@gitolite.kernel.org:pub/scm/linux/kernel/git/brauner/linux tags/kernel/v6.4-rc4/vhost

for you to fetch changes up to b20084b6bc90012a8ccce72ef1c0050d5fd42aa8:

  Revert "vhost_task: Allow vhost layer to use copy_process" (2023-06-01 12:33:19 +0200)

----------------------------------------------------------------
kernel/v6.4-rc4/vhost

----------------------------------------------------------------
Christian Brauner (3):
      Revert "vhost: use vhost_tasks for worker threads"
      Revert "vhost: move worker thread fields to new struct"
      Revert "vhost_task: Allow vhost layer to use copy_process"

 MAINTAINERS                      |   1 -
 drivers/vhost/Kconfig            |   5 --
 drivers/vhost/vhost.c            | 124 +++++++++++++++++----------------
 drivers/vhost/vhost.h            |  11 +--
 include/linux/sched/vhost_task.h |  23 ------
 kernel/Makefile                  |   1 -
 kernel/vhost_task.c              | 117 -------------------------------
 7 files changed, 67 insertions(+), 215 deletions(-)
 delete mode 100644 include/linux/sched/vhost_task.h
 delete mode 100644 kernel/vhost_task.c
On 01.06.23 12:47, Christian Brauner wrote:
> On Thu, Jun 01, 2023 at 09:58:38AM +0200, Thorsten Leemhuis wrote:
[...]
>> Meanwhile -rc5 is just a few days away and there are still a lot of
>> discussions in the patch set proposed to address the issues[1]. Which is
>> kinda great (albeit also why I haven't given it a spin yet), but on the
>> other hand makes me wonder:
>
> You might've missed it in the thread, but it seems everyone is currently
> operating under the assumption that the preferred way is to fix this
> rather than revert.

I saw that, but that was also a week ago already, so I slowly started to
wonder if plans might have/should be changed. Anyway: if that's still
the plan forward it's totally fine for me, if it's fine for Linus. :-D

BTW: I for now didn't sit down to test Mike's patches, as due to all the
discussions I assumed new ones would be coming sooner or later anyway.
If it's worth giving them a shot, please let me know.

> [...]

Thx for the update!

Ciao, Thorsten
On Thu, Jun 1, 2023 at 6:47 AM Christian Brauner <brauner@kernel.org> wrote:
>
> @Mike, do you want to prepare an updated version of the temporary fix?
> If @Linus prefers to just apply it directly he can just grab it from the
> list rather than delaying it. Make sure to grab a Co-developed-by line
> on this, @Mike.

Yeah, let's apply the known "fix the immediate regression" patch wrt
vhost ps output and the freezer. That gets rid of the regression.

I think that we can - and should - then treat the questions about core
dumping and execve as separate issues.

vhost wouldn't have done execve since it's nonsensical and has never
worked anyway, since it always left the old mm ref behind; similarly,
core dumping has never been an issue.

So on those things we don't have any "semantic" issues, we just need to
make sure we don't do crazy things like hang uninterruptibly.

                Linus
On 6/1/23 5:47 AM, Christian Brauner wrote:
> On Thu, Jun 01, 2023 at 09:58:38AM +0200, Thorsten Leemhuis wrote:
[...]
> You might've missed it in the thread, but it seems everyone is currently
> operating under the assumption that the preferred way is to fix this
> rather than revert. See the mail in [1]:
>
> "So I'd really like to finish this. Even if we end up with a hack or
> two in signal handling that we can hopefully fix up later by having
> vhost fix up some of its current assumptions."
>
> which is why no revert was sent for -rc4. And there's a temporary fix we
> seem to have converged on.
>
> @Mike, do you want to prepare an updated version of the temporary fix?
> If @Linus prefers to just apply it directly he can just grab it from the
> list rather than delaying it. Make sure to grab a Co-developed-by line
> on this, @Mike.

Yes, I'll send it within a couple of hours.