Message ID | 1668147701-4583-2-git-send-email-mikelley@microsoft.com |
---|---|
State | New |
Headers |
From: Michael Kelley <mikelley@microsoft.com>
To: hpa@zytor.com, kys@microsoft.com, haiyangz@microsoft.com, wei.liu@kernel.org, decui@microsoft.com, luto@kernel.org, peterz@infradead.org, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, lpieralisi@kernel.org, robh@kernel.org, kw@linux.com, bhelgaas@google.com, arnd@arndb.de, hch@infradead.org, m.szyprowski@samsung.com, robin.murphy@arm.com, thomas.lendacky@amd.com, brijesh.singh@amd.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, Tianyu.Lan@microsoft.com, kirill.shutemov@linux.intel.com, sathyanarayanan.kuppuswamy@linux.intel.com, ak@linux.intel.com, isaku.yamahata@intel.com, dan.j.williams@intel.com, jane.chu@oracle.com, seanjc@google.com, tony.luck@intel.com, x86@kernel.org, linux-kernel@vger.kernel.org, linux-hyperv@vger.kernel.org, netdev@vger.kernel.org, linux-pci@vger.kernel.org, linux-arch@vger.kernel.org, iommu@lists.linux.dev
Cc: mikelley@microsoft.com
Subject: [PATCH v2 01/12] x86/ioremap: Fix page aligned size calculation in __ioremap_caller()
Date: Thu, 10 Nov 2022 22:21:30 -0800
Message-Id: <1668147701-4583-2-git-send-email-mikelley@microsoft.com>
In-Reply-To: <1668147701-4583-1-git-send-email-mikelley@microsoft.com>
References: <1668147701-4583-1-git-send-email-mikelley@microsoft.com> |
Series | Drivers: hv: Add PCI pass-thru support to Hyper-V Confidential VMs |
Commit Message
Michael Kelley (LINUX)
Nov. 11, 2022, 6:21 a.m. UTC
If applying PHYSICAL_PAGE_MASK to the phys_addr argument causes upper
bits to be masked out, the recalculation of size to account for page
alignment is incorrect because the same bits are not masked out in
last_addr.

Fix this by masking the page-aligned last_addr as well.
Signed-off-by: Michael Kelley <mikelley@microsoft.com>
---
arch/x86/mm/ioremap.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
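For illustration (an editor's sketch, not part of the patch or the thread): the stand-alone C program below mimics the size calculation before and after the fix. The PAGE_SIZE, PHYSICAL_PAGE_MASK, and address values are assumptions chosen for demonstration, with bit 46 treated as a metadata bit excluded from the physical address mask; only the two "size =" lines correspond to the old and new code.

/* Editor's illustration -- assumed constants, not the kernel's definitions. */
#include <stdio.h>

#define PAGE_SIZE		0x1000UL
#define PAGE_MASK		(~(PAGE_SIZE - 1))
#define PAGE_ALIGN(x)		(((x) + PAGE_SIZE - 1) & PAGE_MASK)
#define PHYSICAL_PAGE_MASK	(((1UL << 46) - 1) & PAGE_MASK)	/* bit 46 excluded */

int main(void)
{
	unsigned long phys_addr = (1UL << 46) | 0x1230000UL;	/* metadata bit set */
	unsigned long size_in   = 0x2000UL;
	unsigned long last_addr = phys_addr + size_in - 1;
	unsigned long size;

	phys_addr &= PHYSICAL_PAGE_MASK;		/* bit 46 cleared here ... */

	size = PAGE_ALIGN(last_addr + 1) - phys_addr;	/* ... but not here */
	printf("before fix: size = 0x%lx\n", size);	/* huge: ~2^46 bytes */

	size = (PAGE_ALIGN(last_addr + 1) & PHYSICAL_PAGE_MASK) - phys_addr;
	printf("after fix:  size = 0x%lx\n", size);	/* 0x2000 as expected */
	return 0;
}

With these assumed values, the first calculation yields roughly 2^46 bytes while the second yields the requested 0x2000, which is the discrepancy the one-line patch addresses.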
Comments
On 11/10/22 22:21, Michael Kelley wrote:
> If applying PHYSICAL_PAGE_MASK to the phys_addr argument causes upper bits to be masked out, the recalculation of size to account for page alignment is incorrect because the same bits are not masked out in last_addr.
>
> Fix this by masking the page-aligned last_addr as well.

This makes sense at first glance.

How did you notice this? What is the impact to users? Did the bug actually cause you some trouble or was it by inspection? Do you have a sense of how many folks might be impacted? Any thoughts on how it lasted for 14+ years?

For the functionality of the mapping, I guess 'size' doesn't really matter because even a 1-byte 'size' will map a page. The other fallout would be from memtype_reserve() reserving too little. But that's unlikely to matter for small mappings because even though:

	ioremap(0x1800, 0x800);

would end up just reserving 0x1000->0x1800, it still wouldn't allow

	ioremap(0x1000, 0x800);

to succeed, because *both* of them would end up trying to reserve the beginning of the page. Basically, the first caller effectively reserves the whole page and any second user will fail.

So the other place it would matter would be for mappings that span two pages, say:

	ioremap(0x1fff, 0x2)

But I guess those aren't very common. Most large ioremap() callers seem to already have base and size page-aligned.

Anyway, sorry to make such a big deal about a one-liner. But these decade-old bugs really make me wonder how they stuck around for so long.

I'd be curious if you thought about this too while putting together this fix.
From: Dave Hansen <dave.hansen@intel.com> Sent: Friday, November 11, 2022 4:12 PM
>
> On 11/10/22 22:21, Michael Kelley wrote:
> > If applying PHYSICAL_PAGE_MASK to the phys_addr argument causes upper bits to be masked out, the recalculation of size to account for page alignment is incorrect because the same bits are not masked out in last_addr.
> >
> > Fix this by masking the page-aligned last_addr as well.
>
> This makes sense at first glance.
>
> How did you notice this? What is the impact to users? Did the bug actually cause you some trouble or was it by inspection? Do you have a sense of how many folks might be impacted? Any thoughts on how it lasted for 14+ years?
>
> For the functionality of the mapping, I guess 'size' doesn't really matter because even a 1-byte 'size' will map a page. The other fallout would be from memtype_reserve() reserving too little. But that's unlikely to matter for small mappings because even though:
>
> 	ioremap(0x1800, 0x800);
>
> would end up just reserving 0x1000->0x1800, it still wouldn't allow
>
> 	ioremap(0x1000, 0x800);
>
> to succeed, because *both* of them would end up trying to reserve the beginning of the page. Basically, the first caller effectively reserves the whole page and any second user will fail.
>
> So the other place it would matter would be for mappings that span two pages, say:
>
> 	ioremap(0x1fff, 0x2)
>
> But I guess those aren't very common. Most large ioremap() callers seem to already have base and size page-aligned.
>
> Anyway, sorry to make such a big deal about a one-liner. But these decade-old bugs really make me wonder how they stuck around for so long.
>
> I'd be curious if you thought about this too while putting together this fix.

The bug only manifests if the phys_addr input argument exceeds PHYSICAL_PAGE_MASK, which is derived from the global variable physical_mask, i.e., the size of the machine's or VM's physical address space. That's the only case in which masking with PHYSICAL_PAGE_MASK changes anything. So I don't see that your examples fit the situation. In the case where the masking does clear some high-order bits, the "size" calculation yields a huge number, which then quickly causes an error.

With that understanding, I'd guess that over the last 14 years the bug has never manifested, or if it did, it was due to something badly broken in the caller. It's not clear why masking with PHYSICAL_PAGE_MASK is there in the first place, other than as a "safety check" on the phys_addr input argument that wasn't done quite correctly.

I hit the issue because this patch series does a *transition* in how the AMD SNP "vTOM" bit is handled. vTOM is bit 46 in a 47-bit physical address space -- i.e., it's the high-order bit. Current code treats the vTOM bit as part of the physical address, and current code passes addresses with vTOM set into __ioremap_caller(), and everything works. But Patch 5 of this patch series changes the underlying global variable physical_mask to remove bit 46, similar to what tdx_early_init() does. At that point, passing __ioremap_caller() a phys_addr with the vTOM bit set causes the bug and a failure. With the fix, Patch 5 in this series causes __ioremap_caller() to mask out the vTOM bit, which is what I want, at least temporarily.

Later patches in the series change things so that we no longer pass a phys_addr to __ioremap_caller() that has the vTOM bit set. After those later patches, this fix to __ioremap_caller() isn't needed. But I wanted to avoid cramming all the vTOM-related changes into a single huge patch. Having __ioremap_caller() correctly handle a phys_addr that exceeds physical_mask, instead of blowing up, lets this patch series sequence things into reasonable-size chunks. And given that the __ioremap_caller() code is wrong regardless, fixing it seemed like a reasonable overall solution.

Michael
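To make the transition described above concrete, here is a hedged editor's sketch (assumed values; the exact kernel definitions may differ) of what removing the vTOM bit from physical_mask means for PHYSICAL_PAGE_MASK, the page-granular form of that mask:

/* Editor's illustration -- assumed values, not the actual kernel definitions. */
#include <stdio.h>

#define PAGE_SIZE	0x1000UL
#define PAGE_MASK	(~(PAGE_SIZE - 1))

int main(void)
{
	unsigned long physical_mask = (1UL << 47) - 1;	/* 47-bit physical address space */
	unsigned long vtom_bit      = 1UL << 46;	/* AMD SNP vTOM bit (high-order bit) */

	/* What Patch 5 is described as doing: stop treating vTOM as an address bit. */
	physical_mask &= ~vtom_bit;

	/* PHYSICAL_PAGE_MASK is (roughly) physical_mask restricted to page granularity. */
	unsigned long physical_page_mask = physical_mask & PAGE_MASK;

	/* Any phys_addr with the vTOM bit set now exceeds the mask -- exactly the
	 * case where the unfixed size calculation in __ioremap_caller() goes wrong. */
	unsigned long phys_addr = vtom_bit | 0x1230000UL;
	printf("masked phys_addr = 0x%lx (vTOM bit dropped)\n",
	       phys_addr & physical_page_mask);
	return 0;
}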
On 11/10/22 22:21, Michael Kelley wrote:
> --- a/arch/x86/mm/ioremap.c
> +++ b/arch/x86/mm/ioremap.c
> @@ -218,7 +218,7 @@ static void __ioremap_check_mem(resource_size_t addr, unsigned long size,
>  	 */
>  	offset = phys_addr & ~PAGE_MASK;
>  	phys_addr &= PHYSICAL_PAGE_MASK;
> -	size = PAGE_ALIGN(last_addr+1) - phys_addr;
> +	size = (PAGE_ALIGN(last_addr+1) & PHYSICAL_PAGE_MASK) - phys_addr;

Michael, thanks for the explanation in your other reply. First and foremost, I *totally* missed the reason for this patch. I was thinking about issues that could pop up from the _lower_ bits being masked off.

Granted, your changelog _did_ say "upper bits", so shame on me. But it would be great to put some more background in the changelog to make it a bit harder for silly reviewers to miss such things.

I'd also like to propose something that I think is more straightforward:

	/*
	 * Mappings have to be page-aligned
	 */
	offset = phys_addr & ~PAGE_MASK;
	phys_addr &= PAGE_MASK;
	size = PAGE_ALIGN(last_addr+1) - phys_addr;

	/*
	 * Mask out any bits not part of the actual physical
	 * address, like memory encryption bits.
	 */
	phys_addr &= PHYSICAL_PAGE_MASK;

Because, first of all, that "Mappings have to be page-aligned" thing is (now) doing more than page-aligning things. Second, the moment you mask out the metadata bits, the 'size' calculation gets harder. Doing it in two phases (page alignment followed by metadata bit masking) breaks up the two logical operations.
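As a quick sanity check (an editor's sketch, not from the thread; the mask, page size, and addresses are assumed for illustration), the two-phase ordering proposed above and the one-line masking fix from the patch produce the same phys_addr and size for an address carrying a metadata bit:

/* Editor's illustration -- assumed constants, not the kernel's definitions. */
#include <assert.h>
#include <stdio.h>

#define PAGE_SIZE		0x1000UL
#define PAGE_MASK		(~(PAGE_SIZE - 1))
#define PAGE_ALIGN(x)		(((x) + PAGE_SIZE - 1) & PAGE_MASK)
#define PHYSICAL_PAGE_MASK	(((1UL << 46) - 1) & PAGE_MASK)	/* bit 46 assumed metadata */

int main(void)
{
	unsigned long orig = (1UL << 46) | 0x1230100UL;	/* metadata bit + unaligned offset */
	unsigned long last_addr = orig + 0x2000UL - 1;

	/* Variant A: the one-line fix from this patch. */
	unsigned long pa_a = orig & PHYSICAL_PAGE_MASK;
	unsigned long size_a = (PAGE_ALIGN(last_addr + 1) & PHYSICAL_PAGE_MASK) - pa_a;

	/* Variant B: the proposed two-phase ordering. */
	unsigned long pa_b = orig & PAGE_MASK;			/* page-align first */
	unsigned long size_b = PAGE_ALIGN(last_addr + 1) - pa_b;
	pa_b &= PHYSICAL_PAGE_MASK;				/* then drop metadata bits */

	assert(pa_a == pa_b && size_a == size_b);
	printf("phys_addr = 0x%lx, size = 0x%lx\n", pa_a, size_a);
	return 0;
}

The advantage of the two-phase form is readability rather than a different result: size is computed while phys_addr and last_addr are still consistent, and the metadata-bit masking is a separate, clearly commented step.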
From: Dave Hansen <dave.hansen@intel.com> Sent: Monday, November 14, 2022 8:40 AM
>
> On 11/10/22 22:21, Michael Kelley wrote:
> > --- a/arch/x86/mm/ioremap.c
> > +++ b/arch/x86/mm/ioremap.c
> > @@ -218,7 +218,7 @@ static void __ioremap_check_mem(resource_size_t addr, unsigned long size,
> >  	 */
> >  	offset = phys_addr & ~PAGE_MASK;
> >  	phys_addr &= PHYSICAL_PAGE_MASK;
> > -	size = PAGE_ALIGN(last_addr+1) - phys_addr;
> > +	size = (PAGE_ALIGN(last_addr+1) & PHYSICAL_PAGE_MASK) - phys_addr;
>
> Michael, thanks for the explanation in your other reply. First and foremost, I *totally* missed the reason for this patch. I was thinking about issues that could pop up from the _lower_ bits being masked off.
>
> Granted, your changelog _did_ say "upper bits", so shame on me. But it would be great to put some more background in the changelog to make it a bit harder for silly reviewers to miss such things.
>
> I'd also like to propose something that I think is more straightforward:
>
> 	/*
> 	 * Mappings have to be page-aligned
> 	 */
> 	offset = phys_addr & ~PAGE_MASK;
> 	phys_addr &= PAGE_MASK;
> 	size = PAGE_ALIGN(last_addr+1) - phys_addr;
>
> 	/*
> 	 * Mask out any bits not part of the actual physical
> 	 * address, like memory encryption bits.
> 	 */
> 	phys_addr &= PHYSICAL_PAGE_MASK;
>
> Because, first of all, that "Mappings have to be page-aligned" thing is (now) doing more than page-aligning things. Second, the moment you mask out the metadata bits, the 'size' calculation gets harder. Doing it in two phases (page alignment followed by metadata bit masking) breaks up the two logical operations.
>

Works for me. Will do this in v3.

Michael
On 11/14/22 08:53, Michael Kelley (LINUX) wrote:
>> Because, first of all, that "Mappings have to be page-aligned" thing is (now) doing more than page-aligning things. Second, the moment you mask out the metadata bits, the 'size' calculation gets harder. Doing it in two phases (page alignment followed by metadata bit masking) breaks up the two logical operations.
>>
> Works for me. Will do this in v3.

Kirill also made a good point about TDX: it isn't affected by this because it always passes *real* (no metadata bits set) physical addresses in here. Could you double-check that you don't want to do the same for your code?
From: Dave Hansen <dave.hansen@intel.com> Sent: Monday, November 14, 2022 8:58 AM
>
> On 11/14/22 08:53, Michael Kelley (LINUX) wrote:
> >> Because, first of all, that "Mappings have to be page-aligned" thing is (now) doing more than page-aligning things. Second, the moment you mask out the metadata bits, the 'size' calculation gets harder. Doing it in two phases (page alignment followed by metadata bit masking) breaks up the two logical operations.
> >>
> > Works for me. Will do this in v3.
>
> Kirill also made a good point about TDX: it isn't affected by this because it always passes *real* (no metadata bits set) physical addresses in here. Could you double-check that you don't want to do the same for your code?
>

Yes, we want to do the same for the Hyper-V vTOM code. And when this full patch set is applied, we're only passing in *real* physical addresses and are not depending on __ioremap_caller() doing any masking.

But this patch set is executing a transition from the current code, which passes physical addresses with metadata bits set (i.e., the vTOM bit), to the new approach, which does not. There are several places in the current Hyper-V vTOM code that need changes to make this transition. These changes are non-trivial, and I don't want to have to cram them all into one big patch. By making this fix, the current code continues to work throughout this patch series while the changes are made incrementally in multiple individual patches. But when it's all done, we won't be passing any physical addresses with the vTOM bit set.

Note that the current code works and doesn't hit the bug because the global variable physical_mask includes the vTOM bit as part of the physical address. But Patch 5 of the series removes the vTOM bit from physical_mask. At that point, the current __ioremap_caller() code breaks due to the bug. By fixing the bug, the current Hyper-V vTOM code continues to work until all the changes can be completed (which happens in Patch 10 of the series).

Perhaps it's convoluted, but basically I'm trying to avoid having to merge Patches 5 through 10 into one big patch. And since the current __ioremap_caller() code is wrong anyway, fixing it made everything smoother.

Michael
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 78c5bc6..0343de4 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -218,7 +218,7 @@ static void __ioremap_check_mem(resource_size_t addr, unsigned long size,
 	 */
 	offset = phys_addr & ~PAGE_MASK;
 	phys_addr &= PHYSICAL_PAGE_MASK;
-	size = PAGE_ALIGN(last_addr+1) - phys_addr;
+	size = (PAGE_ALIGN(last_addr+1) & PHYSICAL_PAGE_MASK) - phys_addr;
 
 	retval = memtype_reserve(phys_addr, (u64)phys_addr + size,
 				 pcm, &new_pcm);