Message ID | 1668624097-14884-2-git-send-email-mikelley@microsoft.com |
---|---|
State | New |
Headers |
From: Michael Kelley <mikelley@microsoft.com>
To: hpa@zytor.com, kys@microsoft.com, haiyangz@microsoft.com, wei.liu@kernel.org, decui@microsoft.com, luto@kernel.org, peterz@infradead.org, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, lpieralisi@kernel.org, robh@kernel.org, kw@linux.com, bhelgaas@google.com, arnd@arndb.de, hch@infradead.org, m.szyprowski@samsung.com, robin.murphy@arm.com, thomas.lendacky@amd.com, brijesh.singh@amd.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, Tianyu.Lan@microsoft.com, kirill.shutemov@linux.intel.com, sathyanarayanan.kuppuswamy@linux.intel.com, ak@linux.intel.com, isaku.yamahata@intel.com, dan.j.williams@intel.com, jane.chu@oracle.com, seanjc@google.com, tony.luck@intel.com, x86@kernel.org, linux-kernel@vger.kernel.org, linux-hyperv@vger.kernel.org, netdev@vger.kernel.org, linux-pci@vger.kernel.org, linux-arch@vger.kernel.org, iommu@lists.linux.dev
Cc: mikelley@microsoft.com
Subject: [Patch v3 01/14] x86/ioremap: Fix page aligned size calculation in __ioremap_caller()
Date: Wed, 16 Nov 2022 10:41:24 -0800
Message-Id: <1668624097-14884-2-git-send-email-mikelley@microsoft.com>
In-Reply-To: <1668624097-14884-1-git-send-email-mikelley@microsoft.com>
References: <1668624097-14884-1-git-send-email-mikelley@microsoft.com>
X-Mailing-List: linux-kernel@vger.kernel.org |
Series | Add PCI pass-thru support to Hyper-V Confidential VMs |
Commit Message
Michael Kelley (LINUX)
Nov. 16, 2022, 6:41 p.m. UTC
Current code re-calculates the size after aligning the starting and
ending physical addresses on a page boundary. But the re-calculation
also embeds the masking of high order bits that exceed the size of
the physical address space (via PHYSICAL_PAGE_MASK). If the masking
removes any high order bits, the size calculation results in a huge
value that is likely to immediately fail.
Fix this by re-calculating the page-aligned size first. Then mask any
high order bits using PHYSICAL_PAGE_MASK.
Signed-off-by: Michael Kelley <mikelley@microsoft.com>
---
arch/x86/mm/ioremap.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
Comments
On Wed, Nov 16, 2022 at 10:41:24AM -0800, Michael Kelley wrote:
> Fix this by re-calculating the page-aligned size first. Then mask any
> high order bits using PHYSICAL_PAGE_MASK.

This looks like a fix to me that needs to go independently to stable.
And it would need a Fixes tag.

/me does some git archeology...

I guess this one:

ffa71f33a820 ("x86, ioremap: Fix incorrect physical address handling in PAE mode")

should be old enough so that it goes to all relevant stable kernels...

Hmm?
From: Borislav Petkov <bp@alien8.de> Sent: Monday, November 21, 2022 5:33 AM
> I guess this one:
>
> ffa71f33a820 ("x86, ioremap: Fix incorrect physical address handling in PAE mode")
>
> should be old enough so that it goes to all relevant stable kernels...
>
> Hmm?

As discussed in a parallel thread [1], the incorrect code here doesn't have
any real impact in already released Linux kernels. It only affects the
transition that my patch series implements to change the way vTOM is
handled.

I don't know what the tradeoffs are for backporting a fix that doesn't
solve a real problem vs. just letting it be. Every backport carries some
overhead in the process, and there's always a non-zero risk of breaking
something. I've leaned away from adding the "Fixes:" tag in such cases.
But if it's better to go ahead and add the "Fixes:" tag for what's only a
theoretical problem, I'm OK with doing so.

Michael

[1] https://lkml.org/lkml/2022/11/11/1348
On 11/16/22 10:41, Michael Kelley wrote:
> Fix this by re-calculating the page-aligned size first. Then mask any
> high order bits using PHYSICAL_PAGE_MASK.
>
> Signed-off-by: Michael Kelley <mikelley@microsoft.com>

Looks good:

Acked-by: Dave Hansen <dave.hansen@linux.intel.com>

Although I do agree with Boris that this superficially looks like
something that's important to backport. It would be best to either beef
up the changelog to explain why that's not the case, or to treat this as
an actual fix and submit it separately.
On Mon, Nov 21, 2022 at 04:40:16PM +0000, Michael Kelley (LINUX) wrote:
> As discussed in a parallel thread [1], the incorrect code here doesn't have
> any real impact in already released Linux kernels. It only affects the
> transition that my patch series implements to change the way vTOM
> is handled.

Are you sure?

PHYSICAL_PAGE_MASK is controlled by __PHYSICAL_MASK, which is determined
by CONFIG_DYNAMIC_PHYSICAL_MASK and __PHYSICAL_MASK_SHIFT, all of which
differ depending on configuration and can also be dynamic.

It is probably still ok, probably in all possible cases, even though I
wouldn't bet on it.

And this fix is simple and all clear, so lemme ask it differently: what
would be any downsides in backporting it to stable, just in case?

> I don't know what the tradeoffs are for backporting a fix that doesn't solve
> a real problem vs. just letting it be. Every backport carries some overhead
> in the process

Have you seen the deluge of stable fixes? :-)

> and there's always a non-zero risk of breaking something.

I don't see how this one would cause any breakage...

> I've leaned away from adding the "Fixes:" tag in such cases. But if
> it's better to go ahead and add the "Fixes:" tag for what's only a
> theoretical problem, I'm OK with doing so.

I think this is a good-to-have fix anyway, as it is Obviously
Correct(tm).

Unless you have any reservations you haven't shared yet...

> [1] https://lkml.org/lkml/2022/11/11/1348

Btw, the proper way to reference a mail message now is simply:

https://lore.kernel.org/r/<Message-ID>

as long as it has been posted on some ML which lore archives. And I
think it archives all.

Thx.
From: Borislav Petkov <bp@alien8.de> Sent: Monday, November 21, 2022 11:45 AM
> And this fix is simple and all clear so lemme ask it differently: what
> would be any downsides in backporting it to stable, just in case?

None.

> I think this is a good to have fix anyway as it is Obviously
> Correct(tm).
>
> Unless you have any reservations you haven't shared yet...

No reservations. I'll add the "Fixes:" tag.

Michael
From: Dave Hansen <dave.hansen@intel.com> Sent: Monday, November 21, 2022 10:14 AM
> Although I do agree with Boris that this superficially looks like
> something that's important to backport. It would be best to either beef
> up the changelog to explain why that's not the case, or to treat this as
> an actual fix and submit separately.

You and Boris agree and I have no objection, so I'll add the "Fixes:" tag.
I'd like to keep the patch as part of this series because it *is* needed to
make the series work.

Michael
On Mon, Nov 21, 2022 at 09:04:06PM +0000, Michael Kelley (LINUX) wrote:
> You and Boris agree and I have no objection, so I'll add the "Fixes:" tag.
> I'd like to keep the patch as part of this series because it *is* needed to
> make the series work.

Yeah, no worries. I can take it tomorrow through urgent and send it to
Linus this week, so whatever you rebase your tree on should already have
it.
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 78c5bc6..6453fba 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -217,9 +217,15 @@ static void __ioremap_check_mem(resource_size_t addr, unsigned long size,
 	 * Mappings have to be page-aligned
 	 */
 	offset = phys_addr & ~PAGE_MASK;
-	phys_addr &= PHYSICAL_PAGE_MASK;
+	phys_addr &= PAGE_MASK;
 	size = PAGE_ALIGN(last_addr+1) - phys_addr;
 
+	/*
+	 * Mask out any bits not part of the actual physical
+	 * address, like memory encryption bits.
+	 */
+	phys_addr &= PHYSICAL_PAGE_MASK;
+
 	retval = memtype_reserve(phys_addr, (u64)phys_addr + size,
 				 pcm, &new_pcm);
 	if (retval) {