Message ID | 20240217014513.7853-4-dongli.zhang@oracle.com |
State | New |
Headers |
From: Dongli Zhang <dongli.zhang@oracle.com>
To: kvm@vger.kernel.org
Cc: seanjc@google.com, pbonzini@redhat.com, linux-kernel@vger.kernel.org
Subject: [PATCH 3/3] KVM: VMX: simplify MSR interception enable/disable
Date: Fri, 16 Feb 2024 17:45:13 -0800
Message-Id: <20240217014513.7853-4-dongli.zhang@oracle.com>
In-Reply-To: <20240217014513.7853-1-dongli.zhang@oracle.com>
References: <20240217014513.7853-1-dongli.zhang@oracle.com>
Series | KVM: VMX: MSR intercept/passthrough cleanup and simplification
Commit Message
Dongli Zhang
Feb. 17, 2024, 1:45 a.m. UTC
Currently, is_valid_passthrough_msr() is called from only two sites:
vmx_enable_intercept_for_msr() and vmx_disable_intercept_for_msr(). It is
called for two reasons:

1. WARN() if the input msr is neither an x2APIC/PT/LBR passthrough MSR
   nor one of the possible passthrough MSRs.
2. Return whether the msr is a possible passthrough MSR.

While is_valid_passthrough_msr() may traverse
vmx_possible_passthrough_msrs[], the subsequent
possible_passthrough_msr_slot() traverses the same array again. There is
no need to scan the array twice:

vmx_disable_intercept_for_msr()
-> is_valid_passthrough_msr()
   -> possible_passthrough_msr_slot()
-> possible_passthrough_msr_slot()
Therefore, merge is_valid_passthrough_msr() and the subsequent
possible_passthrough_msr_slot() into a single function that:

- WARNs and returns -ENOENT if the msr is not any passthrough MSR.
- Returns VMX_OTHER_PASSTHROUGH for the x2APIC/PT/LBR MSRs.
- Returns VMX_POSSIBLE_PASSTHROUGH and sets *possible_idx for the
  possible passthrough MSRs.
Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
---
arch/x86/kvm/vmx/vmx.c | 55 +++++++++++++++++++++---------------------
1 file changed, 28 insertions(+), 27 deletions(-)
Comments
On Fri, Feb 16, 2024, Dongli Zhang wrote:
> ---
>  arch/x86/kvm/vmx/vmx.c | 55 +++++++++++++++++++++---------------------
>  1 file changed, 28 insertions(+), 27 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 5a866d3c2bc8..76dff0e7d8bd 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -669,14 +669,18 @@ static int possible_passthrough_msr_slot(u32 msr)
>  	return -ENOENT;
>  }
>
> -static bool is_valid_passthrough_msr(u32 msr)
> +#define VMX_POSSIBLE_PASSTHROUGH 1
> +#define VMX_OTHER_PASSTHROUGH 2
> +/*
> + * Vefify if the msr is the passthrough MSRs.
> + * Return the index in *possible_idx if it is a possible passthrough MSR.
> + */
> +static int validate_passthrough_msr(u32 msr, int *possible_idx)

There's no need for a custom tri-state return value or an out-param, just return
the slot/-ENOENT. Not fully tested yet, but this should do the trick.

From: Sean Christopherson <seanjc@google.com>
Date: Mon, 19 Feb 2024 07:58:10 -0800
Subject: [PATCH] KVM: VMX: Combine "check" and "get" APIs for passthrough MSR
 lookups

Combine possible_passthrough_msr_slot() and is_valid_passthrough_msr()
into a single function, vmx_get_passthrough_msr_slot(), and have the
combined helper return the slot on success, using a negative value to
indicate "failure".

Combining the operations avoids iterating over the array of passthrough
MSRs twice for relevant MSRs.

Suggested-by: Dongli Zhang <dongli.zhang@oracle.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/vmx/vmx.c | 63 +++++++++++++++++------------------------
 1 file changed, 25 insertions(+), 38 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 014cf47dc66b..969fd3aa0da3 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -658,25 +658,14 @@ static inline bool cpu_need_virtualize_apic_accesses(struct kvm_vcpu *vcpu)
 	return flexpriority_enabled && lapic_in_kernel(vcpu);
 }
 
-static int possible_passthrough_msr_slot(u32 msr)
+static int vmx_get_passthrough_msr_slot(u32 msr)
 {
-	u32 i;
-
-	for (i = 0; i < ARRAY_SIZE(vmx_possible_passthrough_msrs); i++)
-		if (vmx_possible_passthrough_msrs[i] == msr)
-			return i;
-
-	return -ENOENT;
-}
-
-static bool is_valid_passthrough_msr(u32 msr)
-{
-	bool r;
+	int i;
 
 	switch (msr) {
 	case 0x800 ... 0x8ff:
 		/* x2APIC MSRs. These are handled in vmx_update_msr_bitmap_x2apic() */
-		return true;
+		return -ENOENT;
 	case MSR_IA32_RTIT_STATUS:
 	case MSR_IA32_RTIT_OUTPUT_BASE:
 	case MSR_IA32_RTIT_OUTPUT_MASK:
@@ -691,14 +680,16 @@ static bool is_valid_passthrough_msr(u32 msr)
 	case MSR_LBR_CORE_FROM ... MSR_LBR_CORE_FROM + 8:
 	case MSR_LBR_CORE_TO ... MSR_LBR_CORE_TO + 8:
 		/* LBR MSRs. These are handled in vmx_update_intercept_for_lbr_msrs() */
-		return true;
+		return -ENOENT;
 	}
 
-	r = possible_passthrough_msr_slot(msr) != -ENOENT;
-
-	WARN(!r, "Invalid MSR %x, please adapt vmx_possible_passthrough_msrs[]", msr);
+	for (i = 0; i < ARRAY_SIZE(vmx_possible_passthrough_msrs); i++) {
+		if (vmx_possible_passthrough_msrs[i] == msr)
+			return i;
+	}
 
-	return r;
+	WARN(1, "Invalid MSR %x, please adapt vmx_possible_passthrough_msrs[]", msr);
+	return -ENOENT;
 }
 
 struct vmx_uret_msr *vmx_find_uret_msr(struct vcpu_vmx *vmx, u32 msr)
@@ -3954,6 +3945,7 @@ void vmx_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	unsigned long *msr_bitmap = vmx->vmcs01.msr_bitmap;
+	int idx;
 
 	if (!cpu_has_vmx_msr_bitmap())
 		return;
@@ -3963,16 +3955,13 @@ void vmx_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
 	/*
 	 * Mark the desired intercept state in shadow bitmap, this is needed
 	 * for resync when the MSR filters change.
-	 */
-	if (is_valid_passthrough_msr(msr)) {
-		int idx = possible_passthrough_msr_slot(msr);
-
-		if (idx != -ENOENT) {
-			if (type & MSR_TYPE_R)
-				clear_bit(idx, vmx->shadow_msr_intercept.read);
-			if (type & MSR_TYPE_W)
-				clear_bit(idx, vmx->shadow_msr_intercept.write);
-		}
+	 */
+	idx = vmx_get_passthrough_msr_slot(msr);
+	if (idx >= 0) {
+		if (type & MSR_TYPE_R)
+			clear_bit(idx, vmx->shadow_msr_intercept.read);
+		if (type & MSR_TYPE_W)
+			clear_bit(idx, vmx->shadow_msr_intercept.write);
 	}
 
 	if ((type & MSR_TYPE_R) &&
@@ -3998,6 +3987,7 @@ void vmx_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	unsigned long *msr_bitmap = vmx->vmcs01.msr_bitmap;
+	int idx;
 
 	if (!cpu_has_vmx_msr_bitmap())
 		return;
@@ -4008,15 +3998,12 @@ void vmx_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
 	 * Mark the desired intercept state in shadow bitmap, this is needed
 	 * for resync when the MSR filter changes.
 	 */
-	if (is_valid_passthrough_msr(msr)) {
-		int idx = possible_passthrough_msr_slot(msr);
-
-		if (idx != -ENOENT) {
-			if (type & MSR_TYPE_R)
-				set_bit(idx, vmx->shadow_msr_intercept.read);
-			if (type & MSR_TYPE_W)
-				set_bit(idx, vmx->shadow_msr_intercept.write);
-		}
+	idx = vmx_get_passthrough_msr_slot(msr);
+	if (idx >= 0) {
+		if (type & MSR_TYPE_R)
+			set_bit(idx, vmx->shadow_msr_intercept.read);
+		if (type & MSR_TYPE_W)
+			set_bit(idx, vmx->shadow_msr_intercept.write);
 	}
 
 	if (type & MSR_TYPE_R)

base-commit: 342c6dfc2a0ae893394a6f894acd1d1728c009f2
--
Hi Sean,

On 2/19/24 14:33, Sean Christopherson wrote:
> On Fri, Feb 16, 2024, Dongli Zhang wrote:
>> -static bool is_valid_passthrough_msr(u32 msr)
>> +#define VMX_POSSIBLE_PASSTHROUGH 1
>> +#define VMX_OTHER_PASSTHROUGH 2
>> +/*
>> + * Vefify if the msr is the passthrough MSRs.
>> + * Return the index in *possible_idx if it is a possible passthrough MSR.
>> + */
>> +static int validate_passthrough_msr(u32 msr, int *possible_idx)
>
> There's no need for a custom tri-state return value or an out-param, just return
> the slot/-ENOENT. Not fully tested yet, but this should do the trick.

The new patch looks good to me from a functionality perspective.

It is just that the patched function looks confusing; that is why I
initially added the out-param, to differentiate the cases.

The new vmx_get_passthrough_msr_slot() combines several jobs:

1. Get the possible passthrough msr slot index.
2. For the x2APIC/PT/LBR msrs, return -ENOENT.
3. For any other msr, return the same -ENOENT, with a WARN.

The semantics of the function look confusing. If the objective is to
return the passthrough msr slot, why return -ENOENT for x2APIC/PT/LBR?
And why do both the x2APIC/PT/LBR MSRs and the other MSRs return the same
-ENOENT, while only the other MSRs may trigger a WARN? (I know this is
because the other MSRs do not belong to any passthrough MSRs.)

static int vmx_get_passthrough_msr_slot(u32 msr)
{
	int i;

	switch (msr) {
	case 0x800 ... 0x8ff:
		/* x2APIC MSRs. These are handled in vmx_update_msr_bitmap_x2apic() */
		return -ENOENT;
	case MSR_IA32_RTIT_STATUS:
	case MSR_IA32_RTIT_OUTPUT_BASE:
	case MSR_IA32_RTIT_OUTPUT_MASK:
	case MSR_IA32_RTIT_CR3_MATCH:
	case MSR_IA32_RTIT_ADDR0_A ... MSR_IA32_RTIT_ADDR3_B:
		/* PT MSRs. These are handled in pt_update_intercept_for_msr() */
	case MSR_LBR_SELECT:
	case MSR_LBR_TOS:
	case MSR_LBR_INFO_0 ... MSR_LBR_INFO_0 + 31:
	case MSR_LBR_NHM_FROM ... MSR_LBR_NHM_FROM + 31:
	case MSR_LBR_NHM_TO ... MSR_LBR_NHM_TO + 31:
	case MSR_LBR_CORE_FROM ... MSR_LBR_CORE_FROM + 8:
	case MSR_LBR_CORE_TO ... MSR_LBR_CORE_TO + 8:
		/* LBR MSRs. These are handled in vmx_update_intercept_for_lbr_msrs() */
		return -ENOENT;
	}

	for (i = 0; i < ARRAY_SIZE(vmx_possible_passthrough_msrs); i++) {
		if (vmx_possible_passthrough_msrs[i] == msr)
			return i;
	}

	WARN(1, "Invalid MSR %x, please adapt vmx_possible_passthrough_msrs[]", msr);
	return -ENOENT;
}

The patch looks good to me. Thank you very much!

Dongli Zhang
On Tue, Feb 20, 2024, Dongli Zhang wrote:
> The new vmx_get_passthrough_msr_slot() combines several jobs:
>
> 1. Get the possible passthrough msr slot index.
> 2. For the x2APIC/PT/LBR msrs, return -ENOENT.
> 3. For any other msr, return the same -ENOENT, with a WARN.
>
> The semantics of the function look confusing. If the objective is to
> return the passthrough msr slot, why return -ENOENT for x2APIC/PT/LBR?

Because there is no "slot" for them in vmx_possible_passthrough_msrs, and the
main purpose of the helpers is to get that slot in order to efficiently update
the MSR bitmaps in response to userspace MSR filter changes. The WARN is an
extra sanity check to ensure that KVM doesn't start passing through an MSR
without adding the MSR to vmx_possible_passthrough_msrs (or special casing it
a la XAPIC, PT, and LBR MSRs).

> And why do both the x2APIC/PT/LBR MSRs and the other MSRs return the same
> -ENOENT, while only the other MSRs may trigger a WARN? (I know this is
> because the other MSRs do not belong to any passthrough MSRs.)

The x2APIC/PT/LBR MSRs are given special treatment: KVM may pass them through
to the guest, but unlike the "regular" passthrough MSRs, userspace is NOT
allowed to override that behavior via MSR filters.

And so as mentioned above, they don't have a slot in
vmx_possible_passthrough_msrs.
On 2/21/24 07:43, Sean Christopherson wrote:
> Because there is no "slot" for them in vmx_possible_passthrough_msrs, and the
> main purpose of the helpers is to get that slot in order to efficiently update
> the MSR bitmaps in response to userspace MSR filter changes. The WARN is an
> extra sanity check to ensure that KVM doesn't start passing through an MSR
> without adding the MSR to vmx_possible_passthrough_msrs (or special casing it
> a la XAPIC, PT, and LBR MSRs).
>
> The x2APIC/PT/LBR MSRs are given special treatment: KVM may pass them through
> to the guest, but unlike the "regular" passthrough MSRs, userspace is NOT
> allowed to override that behavior via MSR filters.
>
> And so as mentioned above, they don't have a slot in
> vmx_possible_passthrough_msrs.

Thank you very much for the explanation! This looks good to me.

Dongli Zhang
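To make the resync that Sean describes concrete, the sketch below shows how the slot index and the shadow bitmaps could be replayed into the live MSR bitmap after userspace changes an MSR filter. It is a simplified illustration modeled on KVM's vmx_msr_filter_changed(); the function name example_msr_filter_resync() and the exact loop body are assumptions for illustration, not the upstream code.

/*
 * Simplified, illustrative sketch (NOT the upstream code): replay the
 * desired intercept state recorded in the shadow bitmaps into the live
 * MSR bitmap after an MSR filter change.
 */
static void example_msr_filter_resync(struct kvm_vcpu *vcpu)
{
	struct vcpu_vmx *vmx = to_vmx(vcpu);
	u32 i;

	/*
	 * Only the "possible passthrough" MSRs are replayed here; the
	 * x2APIC/PT/LBR MSRs are managed by their own update paths and
	 * deliberately have no slot in vmx_possible_passthrough_msrs.
	 */
	for (i = 0; i < ARRAY_SIZE(vmx_possible_passthrough_msrs); i++) {
		u32 msr = vmx_possible_passthrough_msrs[i];

		/* Slot i indexes the shadow bitmaps directly. */
		if (test_bit(i, vmx->shadow_msr_intercept.read))
			vmx_enable_intercept_for_msr(vcpu, msr, MSR_TYPE_R);
		else
			vmx_disable_intercept_for_msr(vcpu, msr, MSR_TYPE_R);

		if (test_bit(i, vmx->shadow_msr_intercept.write))
			vmx_enable_intercept_for_msr(vcpu, msr, MSR_TYPE_W);
		else
			vmx_disable_intercept_for_msr(vcpu, msr, MSR_TYPE_W);
	}
}

This is why the combined helper returns -ENOENT for the x2APIC/PT/LBR MSRs without a WARN: they are legitimately passed through, but there is no shadow-bitmap slot for a loop like the one above to consume.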
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 5a866d3c2bc8..76dff0e7d8bd 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -669,14 +669,18 @@ static int possible_passthrough_msr_slot(u32 msr)
 	return -ENOENT;
 }
 
-static bool is_valid_passthrough_msr(u32 msr)
+#define VMX_POSSIBLE_PASSTHROUGH 1
+#define VMX_OTHER_PASSTHROUGH 2
+/*
+ * Vefify if the msr is the passthrough MSRs.
+ * Return the index in *possible_idx if it is a possible passthrough MSR.
+ */
+static int validate_passthrough_msr(u32 msr, int *possible_idx)
 {
-	bool r;
-
 	switch (msr) {
 	case 0x800 ... 0x8ff:
 		/* x2APIC MSRs. These are handled in vmx_update_msr_bitmap_x2apic() */
-		return true;
+		return VMX_OTHER_PASSTHROUGH;
 	case MSR_IA32_RTIT_STATUS:
 	case MSR_IA32_RTIT_OUTPUT_BASE:
 	case MSR_IA32_RTIT_OUTPUT_MASK:
@@ -691,14 +695,17 @@ static bool is_valid_passthrough_msr(u32 msr)
 	case MSR_LBR_CORE_FROM ... MSR_LBR_CORE_FROM + 8:
 	case MSR_LBR_CORE_TO ... MSR_LBR_CORE_TO + 8:
 		/* LBR MSRs. These are handled in vmx_update_intercept_for_lbr_msrs() */
-		return true;
+		return VMX_OTHER_PASSTHROUGH;
 	}
 
-	r = possible_passthrough_msr_slot(msr) != -ENOENT;
+	*possible_idx = possible_passthrough_msr_slot(msr);
+	WARN(*possible_idx == -ENOENT,
+	     "Invalid MSR %x, please adapt vmx_possible_passthrough_msrs[]", msr);
 
-	WARN(!r, "Invalid MSR %x, please adapt vmx_possible_passthrough_msrs[]", msr);
+	if (*possible_idx >= 0)
+		return VMX_POSSIBLE_PASSTHROUGH;
 
-	return r;
+	return -ENOENT;
 }
 
 struct vmx_uret_msr *vmx_find_uret_msr(struct vcpu_vmx *vmx, u32 msr)
@@ -3954,6 +3961,7 @@ void vmx_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	unsigned long *msr_bitmap = vmx->vmcs01.msr_bitmap;
+	int idx;
 
 	if (!cpu_has_vmx_msr_bitmap())
 		return;
@@ -3963,16 +3971,12 @@ void vmx_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
 	/*
 	 * Mark the desired intercept state in shadow bitmap, this is needed
 	 * for resync when the MSR filters change.
-	 */
-	if (is_valid_passthrough_msr(msr)) {
-		int idx = possible_passthrough_msr_slot(msr);
-
-		if (idx != -ENOENT) {
-			if (type & MSR_TYPE_R)
-				clear_bit(idx, vmx->shadow_msr_intercept.read);
-			if (type & MSR_TYPE_W)
-				clear_bit(idx, vmx->shadow_msr_intercept.write);
-		}
+	 */
+	if (validate_passthrough_msr(msr, &idx) == VMX_POSSIBLE_PASSTHROUGH) {
+		if (type & MSR_TYPE_R)
+			clear_bit(idx, vmx->shadow_msr_intercept.read);
+		if (type & MSR_TYPE_W)
+			clear_bit(idx, vmx->shadow_msr_intercept.write);
 	}
 
 	if ((type & MSR_TYPE_R) &&
@@ -3998,6 +4002,7 @@ void vmx_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	unsigned long *msr_bitmap = vmx->vmcs01.msr_bitmap;
+	int idx;
 
 	if (!cpu_has_vmx_msr_bitmap())
 		return;
@@ -4008,15 +4013,11 @@ void vmx_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
 	 * Mark the desired intercept state in shadow bitmap, this is needed
 	 * for resync when the MSR filter changes.
 	 */
-	if (is_valid_passthrough_msr(msr)) {
-		int idx = possible_passthrough_msr_slot(msr);
-
-		if (idx != -ENOENT) {
-			if (type & MSR_TYPE_R)
-				set_bit(idx, vmx->shadow_msr_intercept.read);
-			if (type & MSR_TYPE_W)
-				set_bit(idx, vmx->shadow_msr_intercept.write);
-		}
+	if (validate_passthrough_msr(msr, &idx) == VMX_POSSIBLE_PASSTHROUGH) {
+		if (type & MSR_TYPE_R)
+			set_bit(idx, vmx->shadow_msr_intercept.read);
+		if (type & MSR_TYPE_W)
+			set_bit(idx, vmx->shadow_msr_intercept.write);
 	}
 
 	if (type & MSR_TYPE_R)