From patchwork Wed Jul 19 12:18:42 2023
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 122594
From: Alistair Popple
To: akpm@linux-foundation.org
Cc: ajd@linux.ibm.com, catalin.marinas@arm.com, fbarrat@linux.ibm.com,
    iommu@lists.linux.dev, jgg@ziepe.ca, jhubbard@nvidia.com,
    kevin.tian@intel.com, kvm@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org, mpe@ellerman.id.au,
    nicolinc@nvidia.com, npiggin@gmail.com, robin.murphy@arm.com,
    seanjc@google.com, will@kernel.org, x86@kernel.org,
    zhi.wang.linux@gmail.com, Alistair Popple
Subject: [PATCH v2 1/5] arm64/smmu: Use TLBI ASID when invalidating entire range
Date: Wed, 19 Jul 2023 22:18:42 +1000
Message-Id: <082390057ec33969c81d49d35aa3024d7082b0bd.1689768831.git-series.apopple@nvidia.com>
X-Mailer: git-send-email 2.39.2
X-Mailing-List: linux-kernel@vger.kernel.org

The ARM SMMU has a specific command for invalidating the TLB for an
entire ASID. Currently this is used for the IO_PGTABLE API but not for
ATS when called from the MMU notifier.

The current implementation of notifiers does not attempt to invalidate
such a large address range, instead walking each VMA and invalidating
each range individually during mmap removal. However, in future, SMMU
TLB invalidations are going to be sent as part of the normal
flush_tlb_*() kernel calls. To better deal with that, add handling to
use TLBI ASID when invalidating the entire address space.

Signed-off-by: Alistair Popple
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
index a5a63b1..2a19784 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
@@ -200,10 +200,20 @@ static void arm_smmu_mm_invalidate_range(struct mmu_notifier *mn,
 	 * range. So do a simple translation here by calculating size correctly.
 	 */
 	size = end - start;
+	if (size == ULONG_MAX)
+		size = 0;
+
+	if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_BTM)) {
+		if (!size)
+			arm_smmu_tlb_inv_asid(smmu_domain->smmu,
+					      smmu_mn->cd->asid);
+		else
+			arm_smmu_tlb_inv_range_asid(start, size,
+						    smmu_mn->cd->asid,
+						    PAGE_SIZE, false,
+						    smmu_domain);
+	}
 
-	if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_BTM))
-		arm_smmu_tlb_inv_range_asid(start, size, smmu_mn->cd->asid,
-					    PAGE_SIZE, false, smmu_domain);
 	arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, start, size);
 }
From patchwork Wed Jul 19 12:18:43 2023
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 122595
From: Alistair Popple
To: akpm@linux-foundation.org
Cc: ajd@linux.ibm.com, catalin.marinas@arm.com, fbarrat@linux.ibm.com,
    iommu@lists.linux.dev, jgg@ziepe.ca, jhubbard@nvidia.com,
    kevin.tian@intel.com, kvm@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org, mpe@ellerman.id.au,
    nicolinc@nvidia.com, npiggin@gmail.com, robin.murphy@arm.com,
    seanjc@google.com, will@kernel.org, x86@kernel.org,
    zhi.wang.linux@gmail.com, Alistair Popple
Subject: [PATCH v2 2/5] mmu_notifiers: Fixup comment in mmu_interval_read_begin()
Date: Wed, 19 Jul 2023 22:18:43 +1000
Message-Id: <06fa82756e4d6458895962a7743cc7f162658a54.1689768831.git-series.apopple@nvidia.com>
X-Mailer: git-send-email 2.39.2
X-Mailing-List: linux-kernel@vger.kernel.org

The comment in mmu_interval_read_begin() refers to a function that
doesn't exist and uses the wrong callback name. The op for mmu interval
notifiers is mmu_interval_notifier_ops->invalidate(), so fix the comment
up to reflect that.

Signed-off-by: Alistair Popple
---
 mm/mmu_notifier.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index 50c0dde..b7ad155 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -199,7 +199,7 @@ mmu_interval_read_begin(struct mmu_interval_notifier *interval_sub)
 	 * invalidate_start/end and is colliding.
 	 *
 	 * The locking looks broadly like this:
-	 *  mn_tree_invalidate_start():           mmu_interval_read_begin():
+	 *  mn_itree_inv_start():                 mmu_interval_read_begin():
 	 *                                        spin_lock
 	 *                                         seq = READ_ONCE(interval_sub->invalidate_seq);
 	 *                                         seq == subs->invalidate_seq
@@ -207,7 +207,7 @@ mmu_interval_read_begin(struct mmu_interval_notifier *interval_sub)
 	 *  spin_lock
 	 *   seq = ++subscriptions->invalidate_seq
 	 *  spin_unlock
-	 *                                        op->invalidate_range():
+	 *                                        op->invalidate():
 	 *                                         user_lock
 	 *                                          mmu_interval_set_seq()
 	 *                                           interval_sub->invalidate_seq = seq
iwv1+6J581SwcRu/49pV6ezhRfB9igI/9Pud8ohGpFYib0cjbG6ps++O13OQjb7yvPOo zlIt+3Gsm7SNvwt+k/ZhKQpp2/zRJolPgpAb0Qitq/NJnk/ipSwKEajGug+B42Dj4O7t PughYw40q9akYIIt1JBJbhbUturxjbClioqjjQ+2HLHAaoQ6pN88T4t0gO+uP+u3diQ/ KfqDST58JoeZ/mly9kKnjhlsE62+f7ywFysC+vyJcicDizT9xvYlSc6QOTyBBLvx6MFl Gx6w== ARC-Authentication-Results: i=2; mx.google.com; dkim=pass header.i=@Nvidia.com header.s=selector2 header.b="Kk9d2h/U"; arc=pass (i=1 spf=pass spfdomain=nvidia.com dkim=pass dkdomain=nvidia.com dmarc=pass fromdomain=nvidia.com); spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=nvidia.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id q4-20020a056402032400b0051e065c1573si2943589edw.612.2023.07.19.05.31.01; Wed, 19 Jul 2023 05:31:26 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@Nvidia.com header.s=selector2 header.b="Kk9d2h/U"; arc=pass (i=1 spf=pass spfdomain=nvidia.com dkim=pass dkdomain=nvidia.com dmarc=pass fromdomain=nvidia.com); spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=nvidia.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230284AbjGSMTq (ORCPT + 99 others); Wed, 19 Jul 2023 08:19:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44838 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230233AbjGSMTo (ORCPT ); Wed, 19 Jul 2023 08:19:44 -0400 Received: from NAM12-MW2-obe.outbound.protection.outlook.com 
(mail-mw2nam12on2070.outbound.protection.outlook.com [40.107.244.70]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D282410E5; Wed, 19 Jul 2023 05:19:33 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=lT4Hn+0PVVo3rDt+eWn5r8qKuc3X4dogYNmojWkqrNbDA1P1CF2PVBR+KP9NG0otGbj7ARI5HJc55SVuV+JM67c6GAkobLC0FeVrF4OqCYnkpV12zDjG1ESc0MGY0cx6iwCC7pbwCweSu4uquV00fikp4kFmcIudJke1wgtEHKC2kLkcs4A53vDGWz455m0+aiSdpgJqb9xF64CUMlUj6nHhLWFrvTmfPyDl8RNTzvwaxaCOeYwe6Xuw+uRgaCmM8KSyLc7uYEmLNxnKDBEWpSkiTY06ukY7hHmmTem5ccU/GoOCayLUJ441UgX1K4RTe8rbezmQh1R65trsPP7MIA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=E7ln65n/Hpq4F8/Xi4Ld2pMN/wvlMSEVhslNm/lYul8=; b=av9IOjAfkTccCOaG7Roau4LZeCgkjJI8KG5801kht/tx2V1cHJMqWKWwjq7J8jgmcQy/hrVtXbSoEhpvkmRNKNBFAlMAwNaUJi6lOQEI0vCtc8KbLRHZ5gUBOUaoCy2oRAmQ1hjLH0HWqjf2Jrfhr+3uWprBa3UWKGrN7UehRte6ELSGZNsflVEqpaG0QqhnF7Sqwk6TycvyCJj/WH/4X3iYnmlTJdY9MBF1dcuJErEWKZTVO70UULtOIwrbTOsUboDaGUopmRWVAgA2YWpu4BLeEgT/PaK45dguTPpOfOSsp1r7X4pol/pODo2vMl1PLzDFe/+E5ZDSPFo9FAmQ0Q== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass smtp.mailfrom=nvidia.com; dmarc=pass action=none header.from=nvidia.com; dkim=pass header.d=nvidia.com; arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=E7ln65n/Hpq4F8/Xi4Ld2pMN/wvlMSEVhslNm/lYul8=; 
b=Kk9d2h/UXu8tHao5JqM+1d+vCXPa9l8xGIjEBGf+Dct6lhHNPTrdeEji4/x6VUQd100CAFLhiJObP/IImT1pABxZlyFbd7cOUnIDDgU0mRFh8wBUa/3ocWYOjR4Qx3bFpCpPBCWQYy5I8LRZfufNB4M5bL0bu5f4QIJ/mOS9SuRhMmItD6CAqOGThlsaZk86z65UezDedI6bVe6rJQe6ZFPwcohIFlz5jAxfHhRf895ZaOc9HeAXn6b5eyFs3c3hnOKfmFCN9s6wJqTY59bQDAUzTpaG3KnGbu+FFz5l1IwaSJTR5/zDDl5QG+zYkEgZgfC3o60lKz78rZ3aeWziDw== Authentication-Results: dkim=none (message not signed) header.d=none;dmarc=none action=none header.from=nvidia.com; Received: from BYAPR12MB3176.namprd12.prod.outlook.com (2603:10b6:a03:134::26) by DM4PR12MB5359.namprd12.prod.outlook.com (2603:10b6:5:39e::24) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6588.33; Wed, 19 Jul 2023 12:19:32 +0000 Received: from BYAPR12MB3176.namprd12.prod.outlook.com ([fe80::cd5e:7e33:c2c9:fb74]) by BYAPR12MB3176.namprd12.prod.outlook.com ([fe80::cd5e:7e33:c2c9:fb74%7]) with mapi id 15.20.6588.031; Wed, 19 Jul 2023 12:19:32 +0000 From: Alistair Popple To: akpm@linux-foundation.org Cc: ajd@linux.ibm.com, catalin.marinas@arm.com, fbarrat@linux.ibm.com, iommu@lists.linux.dev, jgg@ziepe.ca, jhubbard@nvidia.com, kevin.tian@intel.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org, mpe@ellerman.id.au, nicolinc@nvidia.com, npiggin@gmail.com, robin.murphy@arm.com, seanjc@google.com, will@kernel.org, x86@kernel.org, zhi.wang.linux@gmail.com, Alistair Popple Subject: [PATCH v2 3/5] mmu_notifiers: Call invalidate_range() when invalidating TLBs Date: Wed, 19 Jul 2023 22:18:44 +1000 Message-Id: <8f293bb51a423afa71ddc3ba46e9f323ee9ffbc7.1689768831.git-series.apopple@nvidia.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: References: X-ClientProxiedBy: SY6PR01CA0013.ausprd01.prod.outlook.com (2603:10c6:10:e8::18) To BYAPR12MB3176.namprd12.prod.outlook.com (2603:10b6:a03:134::26) MIME-Version: 1.0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: 
Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org The invalidate_range() notifier is going to become an architecture-specific mmu notifier used to keep the TLBs of secondary MMUs such as an IOMMU in sync with the CPU page tables. Currently it is called from code paths separate from the main CPU TLB invalidations. This can lead to a secondary TLB not being invalidated when required, and it makes it hard to reason about when exactly the secondary TLB is invalidated. To fix this, move the notifier call into the architecture-specific TLB maintenance functions for architectures that have secondary MMUs requiring explicit software invalidations. This fixes an SMMU bug on ARM64. On ARM64, PTE permission upgrades require a TLB invalidation. This invalidation is done by the architecture-specific ptep_set_access_flags(), which calls flush_tlb_page() if required. However, this doesn't call the notifier, resulting in infinite faults being generated by devices using the SMMU if it has previously cached a read-only PTE in its TLB. Moving the invalidations into the TLB invalidation functions ensures all invalidations happen at the same time as the CPU invalidation.
The architecture specific flush_tlb_all() routines do not call the notifier as none of the IOMMUs require this. Signed-off-by: Alistair Popple Suggested-by: Jason Gunthorpe Tested-by: SeongJae Park Tested-by: Luis Chamberlain --- arch/arm64/include/asm/tlbflush.h | 5 +++++ arch/powerpc/include/asm/book3s/64/tlbflush.h | 1 + arch/powerpc/mm/book3s64/radix_hugetlbpage.c | 1 + arch/powerpc/mm/book3s64/radix_tlb.c | 6 ++++++ arch/x86/mm/tlb.c | 3 +++ include/asm-generic/tlb.h | 1 - 6 files changed, 16 insertions(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h index 3456866..a99349d 100644 --- a/arch/arm64/include/asm/tlbflush.h +++ b/arch/arm64/include/asm/tlbflush.h @@ -13,6 +13,7 @@ #include #include #include +#include #include #include @@ -252,6 +253,7 @@ static inline void flush_tlb_mm(struct mm_struct *mm) __tlbi(aside1is, asid); __tlbi_user(aside1is, asid); dsb(ish); + mmu_notifier_invalidate_range(mm, 0, -1UL); } static inline void __flush_tlb_page_nosync(struct mm_struct *mm, @@ -263,6 +265,8 @@ static inline void __flush_tlb_page_nosync(struct mm_struct *mm, addr = __TLBI_VADDR(uaddr, ASID(mm)); __tlbi(vale1is, addr); __tlbi_user(vale1is, addr); + mmu_notifier_invalidate_range(mm, uaddr & PAGE_MASK, + (uaddr & PAGE_MASK) + PAGE_SIZE); } static inline void flush_tlb_page_nosync(struct vm_area_struct *vma, @@ -396,6 +400,7 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma, scale++; } dsb(ish); + mmu_notifier_invalidate_range(vma->vm_mm, start, end); } static inline void flush_tlb_range(struct vm_area_struct *vma, diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush.h b/arch/powerpc/include/asm/book3s/64/tlbflush.h index 0d0c144..dca0477 100644 --- a/arch/powerpc/include/asm/book3s/64/tlbflush.h +++ b/arch/powerpc/include/asm/book3s/64/tlbflush.h @@ -5,6 +5,7 @@ #define MMU_NO_CONTEXT ~0UL #include +#include #include #include diff --git a/arch/powerpc/mm/book3s64/radix_hugetlbpage.c 
b/arch/powerpc/mm/book3s64/radix_hugetlbpage.c index 5e31955..f3fb49f 100644 --- a/arch/powerpc/mm/book3s64/radix_hugetlbpage.c +++ b/arch/powerpc/mm/book3s64/radix_hugetlbpage.c @@ -39,6 +39,7 @@ void radix__flush_hugetlb_tlb_range(struct vm_area_struct *vma, unsigned long st radix__flush_tlb_pwc_range_psize(vma->vm_mm, start, end, psize); else radix__flush_tlb_range_psize(vma->vm_mm, start, end, psize); + mmu_notifier_invalidate_range(vma->vm_mm, start, end); } void radix__huge_ptep_modify_prot_commit(struct vm_area_struct *vma, diff --git a/arch/powerpc/mm/book3s64/radix_tlb.c b/arch/powerpc/mm/book3s64/radix_tlb.c index 0bd4866..9724b26 100644 --- a/arch/powerpc/mm/book3s64/radix_tlb.c +++ b/arch/powerpc/mm/book3s64/radix_tlb.c @@ -752,6 +752,8 @@ void radix__local_flush_tlb_page(struct vm_area_struct *vma, unsigned long vmadd return radix__local_flush_hugetlb_page(vma, vmaddr); #endif radix__local_flush_tlb_page_psize(vma->vm_mm, vmaddr, mmu_virtual_psize); + mmu_notifier_invalidate_range(vma->vm_mm, vmaddr, + vmaddr + mmu_virtual_psize); } EXPORT_SYMBOL(radix__local_flush_tlb_page); @@ -987,6 +989,7 @@ void radix__flush_tlb_mm(struct mm_struct *mm) } } preempt_enable(); + mmu_notifier_invalidate_range(mm, 0, -1UL); } EXPORT_SYMBOL(radix__flush_tlb_mm); @@ -1020,6 +1023,7 @@ static void __flush_all_mm(struct mm_struct *mm, bool fullmm) _tlbiel_pid_multicast(mm, pid, RIC_FLUSH_ALL); } preempt_enable(); + mmu_notifier_invalidate_range(mm, 0, -1UL); } void radix__flush_all_mm(struct mm_struct *mm) @@ -1228,6 +1232,7 @@ static inline void __radix__flush_tlb_range(struct mm_struct *mm, } out: preempt_enable(); + mmu_notifier_invalidate_range(mm, start, end); } void radix__flush_tlb_range(struct vm_area_struct *vma, unsigned long start, @@ -1392,6 +1397,7 @@ static void __radix__flush_tlb_range_psize(struct mm_struct *mm, } out: preempt_enable(); + mmu_notifier_invalidate_range(mm, start, end); } void radix__flush_tlb_range_psize(struct mm_struct *mm, unsigned long 
start, diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c index 267acf2..c30fbcd 100644 --- a/arch/x86/mm/tlb.c +++ b/arch/x86/mm/tlb.c @@ -10,6 +10,7 @@ #include #include #include +#include #include #include @@ -1036,6 +1037,7 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start, put_flush_tlb_info(); put_cpu(); + mmu_notifier_invalidate_range(mm, start, end); } @@ -1263,6 +1265,7 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch) put_flush_tlb_info(); put_cpu(); + mmu_notifier_invalidate_range(current->mm, 0, -1UL); } /* diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h index b466172..bc32a22 100644 --- a/include/asm-generic/tlb.h +++ b/include/asm-generic/tlb.h @@ -456,7 +456,6 @@ static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb) return; tlb_flush(tlb); - mmu_notifier_invalidate_range(tlb->mm, tlb->start, tlb->end); __tlb_reset_range(tlb); } From patchwork Wed Jul 19 12:18:45 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alistair Popple X-Patchwork-Id: 122592 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:c923:0:b0:3e4:2afc:c1 with SMTP id j3csp2407716vqt; Wed, 19 Jul 2023 05:40:36 -0700 (PDT) X-Google-Smtp-Source: APBJJlF+UqYUl+N4TaxgysD7VtG2K9fbiRXaEQKPNDzlNpoebI13oXNJyuR8Y7rB8td5yHlfN7eQ X-Received: by 2002:a17:906:1046:b0:991:d2a8:658a with SMTP id j6-20020a170906104600b00991d2a8658amr2117484ejj.34.1689770436220; Wed, 19 Jul 2023 05:40:36 -0700 (PDT) ARC-Seal: i=2; a=rsa-sha256; t=1689770436; cv=pass; d=google.com; s=arc-20160816; b=BwY/TEKLE91PhmJ3bqEsOJWkpe3VhfYqRP7XLY3kPxXlOIJ/foTZHkxJCkMCNGJubJ CA7qhEY6R4jVp3P8TwmUCj3ENCJVLnfgGQyU4d/zOv+kSMwweTRQA3z7uaf3vK7PoNzu O/cvOijnD2mP9lp9QKVOHyHiZs7LI4nCBnrHAwRyxN8Y1v1pazbHsMa4104S/myIvO0Q BykDOh0jgZFdy7opNlLLA/T9yPB0c1KVguzPg5c+Q+rCBlqy5AT5l92NLWuj35P27aei hOVQrGLLZ/hwoBwzekE4o+x/338rqJrohZv3Ng6CohrpIbwHZutX94nSz61DDmIP65Mk 9XRg== ARC-Message-Signature: 
From: Alistair Popple To: akpm@linux-foundation.org Cc:
ajd@linux.ibm.com, catalin.marinas@arm.com, fbarrat@linux.ibm.com, iommu@lists.linux.dev, jgg@ziepe.ca, jhubbard@nvidia.com, kevin.tian@intel.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org, mpe@ellerman.id.au, nicolinc@nvidia.com, npiggin@gmail.com, robin.murphy@arm.com, seanjc@google.com, will@kernel.org, x86@kernel.org, zhi.wang.linux@gmail.com, Alistair Popple Subject: [PATCH v2 4/5] mmu_notifiers: Don't invalidate secondary TLBs as part of mmu_notifier_invalidate_range_end() Date: Wed, 19 Jul 2023 22:18:45 +1000 Message-Id: X-Mailer: git-send-email 2.39.2 In-Reply-To: References: MIME-Version: 1.0
Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Secondary TLBs are now invalidated from the architecture-specific TLB invalidation functions. Therefore there is no need to explicitly notify or invalidate as part of the range end functions. This means we can remove mmu_notifier_invalidate_range_only_end() and some of the ptep_*_notify() functions.
Signed-off-by: Alistair Popple --- include/linux/mmu_notifier.h | 56 +------------------------------------ kernel/events/uprobes.c | 2 +- mm/huge_memory.c | 25 ++--------------- mm/hugetlb.c | 1 +- mm/memory.c | 8 +---- mm/migrate_device.c | 9 +----- mm/mmu_notifier.c | 25 ++--------------- mm/rmap.c | 40 +-------------------------- 8 files changed, 14 insertions(+), 152 deletions(-) diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h index 64a3e05..f2e9edc 100644 --- a/include/linux/mmu_notifier.h +++ b/include/linux/mmu_notifier.h @@ -395,8 +395,7 @@ extern int __mmu_notifier_test_young(struct mm_struct *mm, extern void __mmu_notifier_change_pte(struct mm_struct *mm, unsigned long address, pte_t pte); extern int __mmu_notifier_invalidate_range_start(struct mmu_notifier_range *r); -extern void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *r, - bool only_end); +extern void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *r); extern void __mmu_notifier_invalidate_range(struct mm_struct *mm, unsigned long start, unsigned long end); extern bool @@ -481,14 +480,7 @@ mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range) might_sleep(); if (mm_has_notifiers(range->mm)) - __mmu_notifier_invalidate_range_end(range, false); -} - -static inline void -mmu_notifier_invalidate_range_only_end(struct mmu_notifier_range *range) -{ - if (mm_has_notifiers(range->mm)) - __mmu_notifier_invalidate_range_end(range, true); + __mmu_notifier_invalidate_range_end(range); } static inline void mmu_notifier_invalidate_range(struct mm_struct *mm, @@ -582,45 +574,6 @@ static inline void mmu_notifier_range_init_owner( __young; \ }) -#define ptep_clear_flush_notify(__vma, __address, __ptep) \ -({ \ - unsigned long ___addr = __address & PAGE_MASK; \ - struct mm_struct *___mm = (__vma)->vm_mm; \ - pte_t ___pte; \ - \ - ___pte = ptep_clear_flush(__vma, __address, __ptep); \ - mmu_notifier_invalidate_range(___mm, ___addr, \ - ___addr + 
PAGE_SIZE); \ - \ - ___pte; \ -}) - -#define pmdp_huge_clear_flush_notify(__vma, __haddr, __pmd) \ -({ \ - unsigned long ___haddr = __haddr & HPAGE_PMD_MASK; \ - struct mm_struct *___mm = (__vma)->vm_mm; \ - pmd_t ___pmd; \ - \ - ___pmd = pmdp_huge_clear_flush(__vma, __haddr, __pmd); \ - mmu_notifier_invalidate_range(___mm, ___haddr, \ - ___haddr + HPAGE_PMD_SIZE); \ - \ - ___pmd; \ -}) - -#define pudp_huge_clear_flush_notify(__vma, __haddr, __pud) \ -({ \ - unsigned long ___haddr = __haddr & HPAGE_PUD_MASK; \ - struct mm_struct *___mm = (__vma)->vm_mm; \ - pud_t ___pud; \ - \ - ___pud = pudp_huge_clear_flush(__vma, __haddr, __pud); \ - mmu_notifier_invalidate_range(___mm, ___haddr, \ - ___haddr + HPAGE_PUD_SIZE); \ - \ - ___pud; \ -}) - /* * set_pte_at_notify() sets the pte _after_ running the notifier. * This is safe to start by updating the secondary MMUs, because the primary MMU @@ -711,11 +664,6 @@ void mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range) { } -static inline void -mmu_notifier_invalidate_range_only_end(struct mmu_notifier_range *range) -{ -} - static inline void mmu_notifier_invalidate_range(struct mm_struct *mm, unsigned long start, unsigned long end) { diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c index f0ac5b8..3048589 100644 --- a/kernel/events/uprobes.c +++ b/kernel/events/uprobes.c @@ -193,7 +193,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr, } flush_cache_page(vma, addr, pte_pfn(ptep_get(pvmw.pte))); - ptep_clear_flush_notify(vma, addr, pvmw.pte); + ptep_clear_flush(vma, addr, pvmw.pte); if (new_page) set_pte_at_notify(mm, addr, pvmw.pte, mk_pte(new_page, vma->vm_page_prot)); diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 762be2f..3ece117 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2003,7 +2003,7 @@ static void __split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud, count_vm_event(THP_SPLIT_PUD); - pudp_huge_clear_flush_notify(vma, haddr, pud); + 
pudp_huge_clear_flush(vma, haddr, pud); } void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud, @@ -2023,11 +2023,7 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud, out: spin_unlock(ptl); - /* - * No need to double call mmu_notifier->invalidate_range() callback as - * the above pudp_huge_clear_flush_notify() did already call it. - */ - mmu_notifier_invalidate_range_only_end(&range); + mmu_notifier_invalidate_range_end(&range); } #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */ @@ -2094,7 +2090,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, count_vm_event(THP_SPLIT_PMD); if (!vma_is_anonymous(vma)) { - old_pmd = pmdp_huge_clear_flush_notify(vma, haddr, pmd); + old_pmd = pmdp_huge_clear_flush(vma, haddr, pmd); /* * We are going to unmap this huge page. So * just go ahead and zap it @@ -2304,20 +2300,7 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd, out: spin_unlock(ptl); - /* - * No need to double call mmu_notifier->invalidate_range() callback. - * They are 3 cases to consider inside __split_huge_pmd_locked(): - * 1) pmdp_huge_clear_flush_notify() call invalidate_range() obvious - * 2) __split_huge_zero_page_pmd() read only zero page and any write - * fault will trigger a flush_notify before pointing to a new page - * (it is fine if the secondary mmu keeps pointing to the old zero - * page in the meantime) - * 3) Split a huge pmd into pte pointing to the same page. No need - * to invalidate secondary tlb entry they are all still valid. - * any further changes to individual pte will notify. 
So no need - * to call mmu_notifier->invalidate_range() - */ - mmu_notifier_invalidate_range_only_end(&range); + mmu_notifier_invalidate_range_end(&range); } void split_huge_pmd_address(struct vm_area_struct *vma, unsigned long address, diff --git a/mm/hugetlb.c b/mm/hugetlb.c index dc1ec19..9c6e431 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -5715,7 +5715,6 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma, /* Break COW or unshare */ huge_ptep_clear_flush(vma, haddr, ptep); - mmu_notifier_invalidate_range(mm, range.start, range.end); page_remove_rmap(&old_folio->page, vma, true); hugepage_add_new_anon_rmap(new_folio, vma, haddr); if (huge_pte_uffd_wp(pte)) diff --git a/mm/memory.c b/mm/memory.c index ad79039..8dca544 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -3158,7 +3158,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf) * that left a window where the new PTE could be loaded into * some TLBs while the old PTE remains in others. */ - ptep_clear_flush_notify(vma, vmf->address, vmf->pte); + ptep_clear_flush(vma, vmf->address, vmf->pte); folio_add_new_anon_rmap(new_folio, vma, vmf->address); folio_add_lru_vma(new_folio, vma); /* @@ -3204,11 +3204,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf) pte_unmap_unlock(vmf->pte, vmf->ptl); } - /* - * No need to double call mmu_notifier->invalidate_range() callback as - * the above ptep_clear_flush_notify() did already call it. 
- */ - mmu_notifier_invalidate_range_only_end(&range); + mmu_notifier_invalidate_range_end(&range); if (new_folio) folio_put(new_folio); diff --git a/mm/migrate_device.c b/mm/migrate_device.c index e29626e..6c556b5 100644 --- a/mm/migrate_device.c +++ b/mm/migrate_device.c @@ -658,7 +658,7 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate, if (flush) { flush_cache_page(vma, addr, pte_pfn(orig_pte)); - ptep_clear_flush_notify(vma, addr, ptep); + ptep_clear_flush(vma, addr, ptep); set_pte_at_notify(mm, addr, ptep, entry); update_mmu_cache(vma, addr, ptep); } else { @@ -763,13 +763,8 @@ static void __migrate_device_pages(unsigned long *src_pfns, src_pfns[i] &= ~MIGRATE_PFN_MIGRATE; } - /* - * No need to double call mmu_notifier->invalidate_range() callback as - * the above ptep_clear_flush_notify() inside migrate_vma_insert_page() - * did already call it. - */ if (notified) - mmu_notifier_invalidate_range_only_end(&range); + mmu_notifier_invalidate_range_end(&range); } /** diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c index b7ad155..453a156 100644 --- a/mm/mmu_notifier.c +++ b/mm/mmu_notifier.c @@ -551,7 +551,7 @@ int __mmu_notifier_invalidate_range_start(struct mmu_notifier_range *range) static void mn_hlist_invalidate_end(struct mmu_notifier_subscriptions *subscriptions, - struct mmu_notifier_range *range, bool only_end) + struct mmu_notifier_range *range) { struct mmu_notifier *subscription; int id; @@ -559,24 +559,6 @@ mn_hlist_invalidate_end(struct mmu_notifier_subscriptions *subscriptions, id = srcu_read_lock(&srcu); hlist_for_each_entry_rcu(subscription, &subscriptions->list, hlist, srcu_read_lock_held(&srcu)) { - /* - * Call invalidate_range here too to avoid the need for the - * subsystem of having to register an invalidate_range_end - * call-back when there is invalidate_range already. 
Usually a - * subsystem registers either invalidate_range_start()/end() or - * invalidate_range(), so this will be no additional overhead - * (besides the pointer check). - * - * We skip call to invalidate_range() if we know it is safe ie - * call site use mmu_notifier_invalidate_range_only_end() which - * is safe to do when we know that a call to invalidate_range() - * already happen under page table lock. - */ - if (!only_end && subscription->ops->invalidate_range) - subscription->ops->invalidate_range(subscription, - range->mm, - range->start, - range->end); if (subscription->ops->invalidate_range_end) { if (!mmu_notifier_range_blockable(range)) non_block_start(); @@ -589,8 +571,7 @@ mn_hlist_invalidate_end(struct mmu_notifier_subscriptions *subscriptions, srcu_read_unlock(&srcu, id); } -void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range, - bool only_end) +void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range) { struct mmu_notifier_subscriptions *subscriptions = range->mm->notifier_subscriptions; @@ -600,7 +581,7 @@ void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range, mn_itree_inv_end(subscriptions); if (!hlist_empty(&subscriptions->list)) - mn_hlist_invalidate_end(subscriptions, range, only_end); + mn_hlist_invalidate_end(subscriptions, range); lock_map_release(&__mmu_notifier_invalidate_range_start_map); } diff --git a/mm/rmap.c b/mm/rmap.c index 1355bf6..51ec8aa 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -985,13 +985,6 @@ static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw) #endif } - /* - * No need to call mmu_notifier_invalidate_range() as we are - * downgrading page table protection not changing it to point - * to a new page. 
- * - * See Documentation/mm/mmu_notifier.rst - */ if (ret) cleaned++; } @@ -1549,8 +1542,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma, hugetlb_vma_unlock_write(vma); flush_tlb_range(vma, range.start, range.end); - mmu_notifier_invalidate_range(mm, - range.start, range.end); /* * The ref count of the PMD page was * dropped which is part of the way map @@ -1623,9 +1614,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma, * copied pages. */ dec_mm_counter(mm, mm_counter(&folio->page)); - /* We have to invalidate as we cleared the pte */ - mmu_notifier_invalidate_range(mm, address, - address + PAGE_SIZE); } else if (folio_test_anon(folio)) { swp_entry_t entry = { .val = page_private(subpage) }; pte_t swp_pte; @@ -1637,9 +1625,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma, folio_test_swapcache(folio))) { WARN_ON_ONCE(1); ret = false; - /* We have to invalidate as we cleared the pte */ - mmu_notifier_invalidate_range(mm, address, - address + PAGE_SIZE); page_vma_mapped_walk_done(&pvmw); break; } @@ -1670,9 +1655,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma, */ if (ref_count == 1 + map_count && !folio_test_dirty(folio)) { - /* Invalidate as we cleared the pte */ - mmu_notifier_invalidate_range(mm, - address, address + PAGE_SIZE); dec_mm_counter(mm, MM_ANONPAGES); goto discard; } @@ -1727,9 +1709,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma, if (pte_uffd_wp(pteval)) swp_pte = pte_swp_mkuffd_wp(swp_pte); set_pte_at(mm, address, pvmw.pte, swp_pte); - /* Invalidate as we cleared the pte */ - mmu_notifier_invalidate_range(mm, address, - address + PAGE_SIZE); } else { /* * This is a locked file-backed folio, @@ -1745,13 +1724,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma, dec_mm_counter(mm, mm_counter_file(&folio->page)); } discard: - /* - * No need to call 
mmu_notifier_invalidate_range() it has be - * done above for all cases requiring it to happen under page - * table lock before mmu_notifier_invalidate_range_end() - * - * See Documentation/mm/mmu_notifier.rst - */ page_remove_rmap(subpage, vma, folio_test_hugetlb(folio)); if (vma->vm_flags & VM_LOCKED) mlock_drain_local(); @@ -1930,8 +1902,6 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma, hugetlb_vma_unlock_write(vma); flush_tlb_range(vma, range.start, range.end); - mmu_notifier_invalidate_range(mm, - range.start, range.end); /* * The ref count of the PMD page was @@ -2036,9 +2006,6 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma, * copied pages. */ dec_mm_counter(mm, mm_counter(&folio->page)); - /* We have to invalidate as we cleared the pte */ - mmu_notifier_invalidate_range(mm, address, - address + PAGE_SIZE); } else { swp_entry_t entry; pte_t swp_pte; @@ -2102,13 +2069,6 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma, */ } - /* - * No need to call mmu_notifier_invalidate_range() it has be - * done above for all cases requiring it to happen under page - * table lock before mmu_notifier_invalidate_range_end() - * - * See Documentation/mm/mmu_notifier.rst - */ page_remove_rmap(subpage, vma, folio_test_hugetlb(folio)); if (vma->vm_flags & VM_LOCKED) mlock_drain_local(); From patchwork Wed Jul 19 12:18:46 2023 X-Patchwork-Submitter: Alistair Popple X-Patchwork-Id: 122596
From: Alistair Popple To: akpm@linux-foundation.org Cc:
ajd@linux.ibm.com, catalin.marinas@arm.com, fbarrat@linux.ibm.com, iommu@lists.linux.dev, jgg@ziepe.ca, jhubbard@nvidia.com, kevin.tian@intel.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org, mpe@ellerman.id.au, nicolinc@nvidia.com, npiggin@gmail.com, robin.murphy@arm.com, seanjc@google.com, will@kernel.org, x86@kernel.org, zhi.wang.linux@gmail.com, Alistair Popple , Jason Gunthorpe Subject: [PATCH v2 5/5] mmu_notifiers: Rename invalidate_range notifier Date: Wed, 19 Jul 2023 22:18:46 +1000 Message-Id: <9a02dde2f8ddaad2db31e54706a80c12d1817aaf.1689768831.git-series.apopple@nvidia.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: References: MIME-Version: 1.0
X-Mailing-List: linux-kernel@vger.kernel.org There are two main use cases for mmu notifiers. One is by KVM which uses mmu_notifier_invalidate_range_start()/end() to manage a software TLB. The other is to manage hardware TLBs which need to use the invalidate_range() callback because HW can establish new TLB entries at any time. Hence using start/end() can lead to memory corruption as these callbacks happen too soon/late during page unmap. mmu notifier users should therefore either use the start()/end() callbacks or the invalidate_range() callbacks.
To make this usage clearer, rename the invalidate_range() callback to arch_invalidate_secondary_tlbs() and update documentation. Signed-off-by: Alistair Popple Suggested-by: Jason Gunthorpe --- arch/arm64/include/asm/tlbflush.h | 6 +- arch/powerpc/mm/book3s64/radix_hugetlbpage.c | 2 +- arch/powerpc/mm/book3s64/radix_tlb.c | 10 ++-- arch/x86/mm/tlb.c | 4 +- drivers/iommu/amd/iommu_v2.c | 10 ++-- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c | 13 ++--- drivers/iommu/intel/svm.c | 8 +-- drivers/misc/ocxl/link.c | 8 +-- include/linux/mmu_notifier.h | 48 +++++++++--------- mm/huge_memory.c | 4 +- mm/hugetlb.c | 7 +-- mm/mmu_notifier.c | 20 ++++++-- 12 files changed, 76 insertions(+), 64 deletions(-) diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h index a99349d..84a05a0 100644 --- a/arch/arm64/include/asm/tlbflush.h +++ b/arch/arm64/include/asm/tlbflush.h @@ -253,7 +253,7 @@ static inline void flush_tlb_mm(struct mm_struct *mm) __tlbi(aside1is, asid); __tlbi_user(aside1is, asid); dsb(ish); - mmu_notifier_invalidate_range(mm, 0, -1UL); + mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL); } static inline void __flush_tlb_page_nosync(struct mm_struct *mm, @@ -265,7 +265,7 @@ static inline void __flush_tlb_page_nosync(struct mm_struct *mm, addr = __TLBI_VADDR(uaddr, ASID(mm)); __tlbi(vale1is, addr); __tlbi_user(vale1is, addr); - mmu_notifier_invalidate_range(mm, uaddr & PAGE_MASK, + mmu_notifier_arch_invalidate_secondary_tlbs(mm, uaddr & PAGE_MASK, (uaddr & PAGE_MASK) + PAGE_SIZE); } @@ -400,7 +400,7 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma, scale++; } dsb(ish); - mmu_notifier_invalidate_range(vma->vm_mm, start, end); + mmu_notifier_arch_invalidate_secondary_tlbs(vma->vm_mm, start, end); } static inline void flush_tlb_range(struct vm_area_struct *vma, diff --git a/arch/powerpc/mm/book3s64/radix_hugetlbpage.c b/arch/powerpc/mm/book3s64/radix_hugetlbpage.c index f3fb49f..17075c7 100644 ---
a/arch/powerpc/mm/book3s64/radix_hugetlbpage.c +++ b/arch/powerpc/mm/book3s64/radix_hugetlbpage.c @@ -39,7 +39,7 @@ void radix__flush_hugetlb_tlb_range(struct vm_area_struct *vma, unsigned long st radix__flush_tlb_pwc_range_psize(vma->vm_mm, start, end, psize); else radix__flush_tlb_range_psize(vma->vm_mm, start, end, psize); - mmu_notifier_invalidate_range(vma->vm_mm, start, end); + mmu_notifier_arch_invalidate_secondary_tlbs(vma->vm_mm, start, end); } void radix__huge_ptep_modify_prot_commit(struct vm_area_struct *vma, diff --git a/arch/powerpc/mm/book3s64/radix_tlb.c b/arch/powerpc/mm/book3s64/radix_tlb.c index 9724b26..64c11a4 100644 --- a/arch/powerpc/mm/book3s64/radix_tlb.c +++ b/arch/powerpc/mm/book3s64/radix_tlb.c @@ -752,7 +752,7 @@ void radix__local_flush_tlb_page(struct vm_area_struct *vma, unsigned long vmadd return radix__local_flush_hugetlb_page(vma, vmaddr); #endif radix__local_flush_tlb_page_psize(vma->vm_mm, vmaddr, mmu_virtual_psize); - mmu_notifier_invalidate_range(vma->vm_mm, vmaddr, + mmu_notifier_arch_invalidate_secondary_tlbs(vma->vm_mm, vmaddr, vmaddr + mmu_virtual_psize); } EXPORT_SYMBOL(radix__local_flush_tlb_page); @@ -989,7 +989,7 @@ void radix__flush_tlb_mm(struct mm_struct *mm) } } preempt_enable(); - mmu_notifier_invalidate_range(mm, 0, -1UL); + mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL); } EXPORT_SYMBOL(radix__flush_tlb_mm); @@ -1023,7 +1023,7 @@ static void __flush_all_mm(struct mm_struct *mm, bool fullmm) _tlbiel_pid_multicast(mm, pid, RIC_FLUSH_ALL); } preempt_enable(); - mmu_notifier_invalidate_range(mm, 0, -1UL); + mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL); } void radix__flush_all_mm(struct mm_struct *mm) @@ -1232,7 +1232,7 @@ static inline void __radix__flush_tlb_range(struct mm_struct *mm, } out: preempt_enable(); - mmu_notifier_invalidate_range(mm, start, end); + mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end); } void radix__flush_tlb_range(struct vm_area_struct *vma, unsigned long 
start, @@ -1397,7 +1397,7 @@ static void __radix__flush_tlb_range_psize(struct mm_struct *mm, } out: preempt_enable(); - mmu_notifier_invalidate_range(mm, start, end); + mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end); } void radix__flush_tlb_range_psize(struct mm_struct *mm, unsigned long start, diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c index c30fbcd..0b990fb 100644 --- a/arch/x86/mm/tlb.c +++ b/arch/x86/mm/tlb.c @@ -1037,7 +1037,7 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start, put_flush_tlb_info(); put_cpu(); - mmu_notifier_invalidate_range(mm, start, end); + mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end); } @@ -1265,7 +1265,7 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch) put_flush_tlb_info(); put_cpu(); - mmu_notifier_invalidate_range(current->mm, 0, -1UL); + mmu_notifier_arch_invalidate_secondary_tlbs(current->mm, 0, -1UL); } /* diff --git a/drivers/iommu/amd/iommu_v2.c b/drivers/iommu/amd/iommu_v2.c index 261352a..2596466 100644 --- a/drivers/iommu/amd/iommu_v2.c +++ b/drivers/iommu/amd/iommu_v2.c @@ -355,9 +355,9 @@ static struct pasid_state *mn_to_state(struct mmu_notifier *mn) return container_of(mn, struct pasid_state, mn); } -static void mn_invalidate_range(struct mmu_notifier *mn, - struct mm_struct *mm, - unsigned long start, unsigned long end) +static void mn_arch_invalidate_secondary_tlbs(struct mmu_notifier *mn, + struct mm_struct *mm, + unsigned long start, unsigned long end) { struct pasid_state *pasid_state; struct device_state *dev_state; @@ -391,8 +391,8 @@ static void mn_release(struct mmu_notifier *mn, struct mm_struct *mm) } static const struct mmu_notifier_ops iommu_mn = { - .release = mn_release, - .invalidate_range = mn_invalidate_range, + .release = mn_release, + .arch_invalidate_secondary_tlbs = mn_arch_invalidate_secondary_tlbs, }; static void set_pri_tag_status(struct pasid_state *pasid_state, diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c 
b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c index 2a19784..dbc812a 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c @@ -186,9 +186,10 @@ static void arm_smmu_free_shared_cd(struct arm_smmu_ctx_desc *cd) } } -static void arm_smmu_mm_invalidate_range(struct mmu_notifier *mn, - struct mm_struct *mm, - unsigned long start, unsigned long end) +static void arm_smmu_mm_arch_invalidate_secondary_tlbs(struct mmu_notifier *mn, + struct mm_struct *mm, + unsigned long start, + unsigned long end) { struct arm_smmu_mmu_notifier *smmu_mn = mn_to_smmu(mn); struct arm_smmu_domain *smmu_domain = smmu_mn->domain; @@ -247,9 +248,9 @@ static void arm_smmu_mmu_notifier_free(struct mmu_notifier *mn) } static const struct mmu_notifier_ops arm_smmu_mmu_notifier_ops = { - .invalidate_range = arm_smmu_mm_invalidate_range, - .release = arm_smmu_mm_release, - .free_notifier = arm_smmu_mmu_notifier_free, + .arch_invalidate_secondary_tlbs = arm_smmu_mm_arch_invalidate_secondary_tlbs, + .release = arm_smmu_mm_release, + .free_notifier = arm_smmu_mmu_notifier_free, }; /* Allocate or get existing MMU notifier for this {domain, mm} pair */ diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c index e95b339..8f6d680 100644 --- a/drivers/iommu/intel/svm.c +++ b/drivers/iommu/intel/svm.c @@ -219,9 +219,9 @@ static void intel_flush_svm_range(struct intel_svm *svm, unsigned long address, } /* Pages have been freed at this point */ -static void intel_invalidate_range(struct mmu_notifier *mn, - struct mm_struct *mm, - unsigned long start, unsigned long end) +static void intel_arch_invalidate_secondary_tlbs(struct mmu_notifier *mn, + struct mm_struct *mm, + unsigned long start, unsigned long end) { struct intel_svm *svm = container_of(mn, struct intel_svm, notifier); @@ -256,7 +256,7 @@ static void intel_mm_release(struct mmu_notifier *mn, struct mm_struct *mm) static const struct mmu_notifier_ops intel_mmuops = { .release 
= intel_mm_release, - .invalidate_range = intel_invalidate_range, + .arch_invalidate_secondary_tlbs = intel_arch_invalidate_secondary_tlbs, }; static DEFINE_MUTEX(pasid_mutex); diff --git a/drivers/misc/ocxl/link.c b/drivers/misc/ocxl/link.c index 4cf4c55..c06c699 100644 --- a/drivers/misc/ocxl/link.c +++ b/drivers/misc/ocxl/link.c @@ -491,9 +491,9 @@ void ocxl_link_release(struct pci_dev *dev, void *link_handle) } EXPORT_SYMBOL_GPL(ocxl_link_release); -static void invalidate_range(struct mmu_notifier *mn, - struct mm_struct *mm, - unsigned long start, unsigned long end) +static void arch_invalidate_secondary_tlbs(struct mmu_notifier *mn, + struct mm_struct *mm, + unsigned long start, unsigned long end) { struct pe_data *pe_data = container_of(mn, struct pe_data, mmu_notifier); struct ocxl_link *link = pe_data->link; @@ -509,7 +509,7 @@ static void invalidate_range(struct mmu_notifier *mn, } static const struct mmu_notifier_ops ocxl_mmu_notifier_ops = { - .invalidate_range = invalidate_range, + .arch_invalidate_secondary_tlbs = arch_invalidate_secondary_tlbs, }; static u64 calculate_cfg_state(bool kernel) diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h index f2e9edc..6e3c857 100644 --- a/include/linux/mmu_notifier.h +++ b/include/linux/mmu_notifier.h @@ -187,27 +187,27 @@ struct mmu_notifier_ops { const struct mmu_notifier_range *range); /* - * invalidate_range() is either called between - * invalidate_range_start() and invalidate_range_end() when the - * VM has to free pages that where unmapped, but before the - * pages are actually freed, or outside of _start()/_end() when - * a (remote) TLB is necessary. + * arch_invalidate_secondary_tlbs() is used to manage a non-CPU TLB + * which shares page-tables with the CPU. The + * invalidate_range_start()/end() callbacks should not be implemented as + * invalidate_secondary_tlbs() already catches the points in time when + * an external TLB needs to be flushed. 
* - * If invalidate_range() is used to manage a non-CPU TLB with - * shared page-tables, it not necessary to implement the - * invalidate_range_start()/end() notifiers, as - * invalidate_range() already catches the points in time when an - * external TLB range needs to be flushed. For more in depth - * discussion on this see Documentation/mm/mmu_notifier.rst + * This requires arch_invalidate_secondary_tlbs() to be called while + * holding the ptl spin-lock and therefore this callback is not allowed + * to sleep. * - * Note that this function might be called with just a sub-range - * of what was passed to invalidate_range_start()/end(), if - * called between those functions. + * This is called by architecture code whenever invalidating a TLB + * entry. It is assumed that any secondary TLB has the same rules for + * when invalidations are required. If this is not the case architecture + * code will need to call this explicitly when required for secondary + * TLB invalidation. */ - void (*invalidate_range)(struct mmu_notifier *subscription, - struct mm_struct *mm, - unsigned long start, - unsigned long end); + void (*arch_invalidate_secondary_tlbs)( + struct mmu_notifier *subscription, + struct mm_struct *mm, + unsigned long start, + unsigned long end); /* * These callbacks are used with the get/put interface to manage the @@ -396,8 +396,8 @@ extern void __mmu_notifier_change_pte(struct mm_struct *mm, unsigned long address, pte_t pte); extern int __mmu_notifier_invalidate_range_start(struct mmu_notifier_range *r); extern void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *r); -extern void __mmu_notifier_invalidate_range(struct mm_struct *mm, - unsigned long start, unsigned long end); +extern void __mmu_notifier_arch_invalidate_secondary_tlbs(struct mm_struct *mm, + unsigned long start, unsigned long end); extern bool mmu_notifier_range_update_to_read_only(const struct mmu_notifier_range *range); @@ -483,11 +483,11 @@ 
mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range) __mmu_notifier_invalidate_range_end(range); } -static inline void mmu_notifier_invalidate_range(struct mm_struct *mm, - unsigned long start, unsigned long end) +static inline void mmu_notifier_arch_invalidate_secondary_tlbs(struct mm_struct *mm, + unsigned long start, unsigned long end) { if (mm_has_notifiers(mm)) - __mmu_notifier_invalidate_range(mm, start, end); + __mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end); } static inline void mmu_notifier_subscriptions_init(struct mm_struct *mm) @@ -664,7 +664,7 @@ void mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range) { } -static inline void mmu_notifier_invalidate_range(struct mm_struct *mm, +static inline void mmu_notifier_arch_invalidate_secondary_tlbs(struct mm_struct *mm, unsigned long start, unsigned long end) { } diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 3ece117..e0420de 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2120,8 +2120,8 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, if (is_huge_zero_pmd(*pmd)) { /* * FIXME: Do we want to invalidate secondary mmu by calling - * mmu_notifier_invalidate_range() see comments below inside - * __split_huge_pmd() ? + * mmu_notifier_arch_invalidate_secondary_tlbs() see comments below + * inside __split_huge_pmd() ? * * We are going from a zero huge page write protected to zero * small page also write protected so it does not seems useful diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 9c6e431..e0028cb 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -6676,8 +6676,9 @@ long hugetlb_change_protection(struct vm_area_struct *vma, else flush_hugetlb_tlb_range(vma, start, end); /* - * No need to call mmu_notifier_invalidate_range() we are downgrading - * page table protection not changing it to point to a new page. 
+ * No need to call mmu_notifier_arch_invalidate_secondary_tlbs() we are + * downgrading page table protection not changing it to point to a new + * page. * * See Documentation/mm/mmu_notifier.rst */ @@ -7321,7 +7322,7 @@ static void hugetlb_unshare_pmds(struct vm_area_struct *vma, i_mmap_unlock_write(vma->vm_file->f_mapping); hugetlb_vma_unlock_write(vma); /* - * No need to call mmu_notifier_invalidate_range(), see + * No need to call mmu_notifier_arch_invalidate_secondary_tlbs(), see * Documentation/mm/mmu_notifier.rst. */ mmu_notifier_invalidate_range_end(&range); diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c index 453a156..63c8eb7 100644 --- a/mm/mmu_notifier.c +++ b/mm/mmu_notifier.c @@ -585,8 +585,8 @@ void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range) lock_map_release(&__mmu_notifier_invalidate_range_start_map); } -void __mmu_notifier_invalidate_range(struct mm_struct *mm, - unsigned long start, unsigned long end) +void __mmu_notifier_arch_invalidate_secondary_tlbs(struct mm_struct *mm, + unsigned long start, unsigned long end) { struct mmu_notifier *subscription; int id; @@ -595,9 +595,10 @@ void __mmu_notifier_invalidate_range(struct mm_struct *mm, hlist_for_each_entry_rcu(subscription, &mm->notifier_subscriptions->list, hlist, srcu_read_lock_held(&srcu)) { - if (subscription->ops->invalidate_range) - subscription->ops->invalidate_range(subscription, mm, - start, end); + if (subscription->ops->arch_invalidate_secondary_tlbs) + subscription->ops->arch_invalidate_secondary_tlbs( + subscription, mm, + start, end); } srcu_read_unlock(&srcu, id); } @@ -616,6 +617,15 @@ int __mmu_notifier_register(struct mmu_notifier *subscription, mmap_assert_write_locked(mm); BUG_ON(atomic_read(&mm->mm_users) <= 0); + /* + * Subsystems should only register for invalidate_secondary_tlbs() or + * invalidate_range_start()/end() callbacks, not both. 
+ */ + if (WARN_ON_ONCE(subscription->ops->arch_invalidate_secondary_tlbs && + (subscription->ops->invalidate_range_start || + subscription->ops->invalidate_range_end))) + return -EINVAL; + if (!mm->notifier_subscriptions) { /* * kmalloc cannot be called under mm_take_all_locks(), but we