Message ID | 20221023025047.470646-1-mike.kravetz@oracle.com |
---|---|
State | New |
Headers |
From: Mike Kravetz <mike.kravetz@oracle.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Naoya Horiguchi <naoya.horiguchi@linux.dev>, David Hildenbrand <david@redhat.com>, Axel Rasmussen <axelrasmussen@google.com>, Mina Almasry <almasrymina@google.com>, Peter Xu <peterx@redhat.com>, Rik van Riel <riel@surriel.com>, Vlastimil Babka <vbabka@suse.cz>, Matthew Wilcox <willy@infradead.org>, Andrew Morton <akpm@linux-foundation.org>, Mike Kravetz <mike.kravetz@oracle.com>, Wei Chen <harperchen1110@gmail.com>, stable@vger.kernel.org
Subject: [PATCH v2] hugetlb: don't delete vma_lock in hugetlb MADV_DONTNEED processing
Date: Sat, 22 Oct 2022 19:50:47 -0700
Message-Id: <20221023025047.470646-1-mike.kravetz@oracle.com>
Series | [v2] hugetlb: don't delete vma_lock in hugetlb MADV_DONTNEED processing
Commit Message
Mike Kravetz
Oct. 23, 2022, 2:50 a.m. UTC
madvise(MADV_DONTNEED) ends up calling zap_page_range() to clear the
page tables associated with the address range. For hugetlb vmas,
zap_page_range will call __unmap_hugepage_range_final. However,
__unmap_hugepage_range_final assumes the passed vma is about to be
removed and deletes the vma_lock to prevent pmd sharing as the vma is
on the way out. In the case of madvise(MADV_DONTNEED) the vma remains,
but the missing vma_lock prevents pmd sharing and could potentially
lead to issues with truncation/fault races.
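
For illustration, a minimal userspace sketch of the sequence under discussion, where the vma survives madvise(MADV_DONTNEED) and is faulted again. This is not the reproducer from [1]; the 2MB page size and the anonymous MAP_HUGETLB mapping are assumptions for the sketch, and it needs preallocated hugepages to run.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define LEN (2UL * 1024 * 1024)	/* one hugepage; 2MB size is an assumption */

int main(void)
{
	/* Shared anonymous hugetlb mapping. */
	char *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
		       MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	memset(p, 1, LEN);		/* populate the page table */
	madvise(p, LEN, MADV_DONTNEED);	/* zaps via __unmap_hugepage_range_final */
	memset(p, 2, LEN);		/* subsequent fault on the same, still-live vma */

	munmap(p, LEN);
	return 0;
}
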
This issue was originally reported here [1] as a BUG triggered in
page_try_dup_anon_rmap. Prior to the introduction of the hugetlb
vma_lock, __unmap_hugepage_range_final cleared the VM_MAYSHARE flag to
prevent pmd sharing. This confused subsequent faults on the vma:
VM_MAYSHARE indicates a sharable vma, but since the flag had been cleared,
page_mapping was not set in new pages added to the page table. This resulted in
pages that appeared anonymous in a VM_SHARED vma and triggered the BUG.
Create a new routine clear_hugetlb_page_range() that can be called from
madvise(MADV_DONTNEED) for hugetlb vmas. It has the same setup as
zap_page_range, but does not delete the vma_lock.
[1] https://lore.kernel.org/lkml/CAO4mrfdLMXsao9RF4fUE8-Wfde8xmjsKrTNMNC9wjUb6JudD0g@mail.gmail.com/
Fixes: 90e7e7f5ef3f ("mm: enable MADV_DONTNEED for hugetlb mappings")
Reported-by: Wei Chen <harperchen1110@gmail.com>
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: <stable@vger.kernel.org>
---
v2 - Fix build issues associated with !CONFIG_ADVISE_SYSCALLS
include/linux/hugetlb.h |  7 +++++
mm/hugetlb.c            | 62 +++++++++++++++++++++++++++++++--------
mm/madvise.c            |  5 +++-
3 files changed, 61 insertions(+), 13 deletions(-)
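
For reference, the madvise-side dispatch this v2 introduces, condensed from the mm/madvise.c hunk quoted in the comments below: hugetlb vmas take the new helper so the vma_lock is preserved.

static long madvise_dontneed_single_vma(struct vm_area_struct *vma,
					unsigned long start, unsigned long end)
{
	/* hugetlb vmas must not go through zap_page_range(), which would
	 * reach __unmap_hugepage_range_final() and delete the vma_lock */
	if (!is_vm_hugetlb_page(vma))
		zap_page_range(vma, start, end - start);
	else
		clear_hugetlb_page_range(vma, start, end);
	return 0;
}
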
Comments
On 10/22/22 19:50, Mike Kravetz wrote: > madvise(MADV_DONTNEED) ends up calling zap_page_range() to clear the > page tables associated with the address range. For hugetlb vmas, > zap_page_range will call __unmap_hugepage_range_final. However, > __unmap_hugepage_range_final assumes the passed vma is about to be > removed and deletes the vma_lock to prevent pmd sharing as the vma is > on the way out. In the case of madvise(MADV_DONTNEED) the vma remains, > but the missing vma_lock prevents pmd sharing and could potentially > lead to issues with truncation/fault races. > > This issue was originally reported here [1] as a BUG triggered in > page_try_dup_anon_rmap. Prior to the introduction of the hugetlb > vma_lock, __unmap_hugepage_range_final cleared the VM_MAYSHARE flag to > prevent pmd sharing. Subsequent faults on this vma were confused as > VM_MAYSHARE indicates a sharable vma, but was not set so page_mapping > was not set in new pages added to the page table. This resulted in > pages that appeared anonymous in a VM_SHARED vma and triggered the BUG. > > Create a new routine clear_hugetlb_page_range() that can be called from > madvise(MADV_DONTNEED) for hugetlb vmas. It has the same setup as > zap_page_range, but does not delete the vma_lock. After seeing a syzbot use after free report [2] that is also addressed by this patch, I started thinking ... When __unmap_hugepage_range_final was created, the only time unmap_single_vma was called for hugetlb vmas was during process exit time via exit_mmap. I got in trouble when I added a call via madvise(MADV_DONTNEED) which calls zap_page_range. This patch takes care of that calling path by having madvise(MADV_DONTNEED) call a new routine clear_hugetlb_page_range instead of zap_page_range for hugetlb vmas. The use after free bug had me auditing code paths to make sure __unmap_hugepage_range_final was REALLY only called at process exit time. If not, and we could fault on a vma after calling __unmap_hugepage_range_final we would be in trouble. My thought was, what if we had __unmap_hugepage_range_final check mm->mm_users to determine if it was being called in the process exit path? If !mm_users, then we can delete the vma_lock to prevent pmd sharing as we know the process is exiting. If not, we do not delete the lock. That seems to be more robust and would prevent issues if someone accidentally introduces a new code path where __unmap_hugepage_range_final (unmap_single_vma for a hugetlb vma) could be called outside process exit context. Thoughts? [2] https://lore.kernel.org/linux-mm/000000000000d5e00a05e834962e@google.com/
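
Condensed, the check proposed above looks like the following, taken from the unlock path of the v3 diff in the next message: the vma_lock is freed only when mm_users has dropped to zero, i.e. when the unmap is part of process exit.

	if (!atomic_read(&mm->mm_users)) {
		/* process exit: free the vma_lock to block future pmd sharing */
		__hugetlb_vma_unlock_write_free(vma);
		i_mmap_unlock_write(vma->vm_file->f_mapping);
	} else {
		/* vma stays around (e.g. MADV_DONTNEED): keep the vma_lock */
		i_mmap_unlock_write(vma->vm_file->f_mapping);
		hugetlb_vma_unlock_write(vma);
	}
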
On 10/24/22 14:55, Mike Kravetz wrote: > On 10/22/22 19:50, Mike Kravetz wrote: > > madvise(MADV_DONTNEED) ends up calling zap_page_range() to clear the > > page tables associated with the address range. For hugetlb vmas, > > zap_page_range will call __unmap_hugepage_range_final. However, > > __unmap_hugepage_range_final assumes the passed vma is about to be > > removed and deletes the vma_lock to prevent pmd sharing as the vma is > > on the way out. In the case of madvise(MADV_DONTNEED) the vma remains, > > but the missing vma_lock prevents pmd sharing and could potentially > > lead to issues with truncation/fault races. > > > > This issue was originally reported here [1] as a BUG triggered in > > page_try_dup_anon_rmap. Prior to the introduction of the hugetlb > > vma_lock, __unmap_hugepage_range_final cleared the VM_MAYSHARE flag to > > prevent pmd sharing. Subsequent faults on this vma were confused as > > VM_MAYSHARE indicates a sharable vma, but was not set so page_mapping > > was not set in new pages added to the page table. This resulted in > > pages that appeared anonymous in a VM_SHARED vma and triggered the BUG. > > > > Create a new routine clear_hugetlb_page_range() that can be called from > > madvise(MADV_DONTNEED) for hugetlb vmas. It has the same setup as > > zap_page_range, but does not delete the vma_lock. > > After seeing a syzbot use after free report [2] that is also addressed by > this patch, I started thinking ... > > When __unmap_hugepage_range_final was created, the only time unmap_single_vma > was called for hugetlb vmas was during process exit time via exit_mmap. I got > in trouble when I added a call via madvise(MADV_DONTNEED) which calls > zap_page_range. This patch takes care of that calling path by having > madvise(MADV_DONTNEED) call a new routine clear_hugetlb_page_range instead of > zap_page_range for hugetlb vmas. The use after free bug had me auditing code > paths to make sure __unmap_hugepage_range_final was REALLY only called at > process exit time. If not, and we could fault on a vma after calling > __unmap_hugepage_range_final we would be in trouble. > > My thought was, what if we had __unmap_hugepage_range_final check mm->mm_users > to determine if it was being called in the process exit path? If !mm_users, > then we can delete the vma_lock to prevent pmd sharing as we know the process > is exiting. If not, we do not delete the lock. That seems to be more robust > and would prevent issues if someone accidentally introduces a new code path > where __unmap_hugepage_range_final (unmap_single_vma for a hugetlb vma) > could be called outside process exit context. > > Thoughts? > > [2] https://lore.kernel.org/linux-mm/000000000000d5e00a05e834962e@google.com/ Sorry if this seems like I am talking to myself. Here is a proposed v3 as described above. From 1466fd43e180ede3f6479d1dca4e7f350f86f80b Mon Sep 17 00:00:00 2001 From: Mike Kravetz <mike.kravetz@oracle.com> Date: Mon, 24 Oct 2022 15:40:05 -0700 Subject: [PATCH v3] hugetlb: don't delete vma_lock in hugetlb MADV_DONTNEED processing When hugetlb madvise(MADV_DONTNEED) support was added, the existing code to call zap_page_range() to clear the page tables associated with the address range was not modified. However, for hugetlb vmas zap_page_range will call __unmap_hugepage_range_final. This routine assumes the passed hugetlb vma is about to be removed and deletes the vma_lock to prevent pmd sharing as the vma is on the way out. 
In the case of madvise(MADV_DONTNEED) the vma remains, but the missing vma_lock prevents pmd sharing and could potentially lead to issues with truncation/fault races. This issue was originally reported here [1] as a BUG triggered in page_try_dup_anon_rmap. Prior to the introduction of the hugetlb vma_lock, __unmap_hugepage_range_final cleared the VM_MAYSHARE flag to prevent pmd sharing. Subsequent faults on this vma were confused as VM_MAYSHARE indicates a sharable vma, but was not set so page_mapping was not set in new pages added to the page table. This resulted in pages that appeared anonymous in a VM_SHARED vma and triggered the BUG. __unmap_hugepage_range_final was originally designed only to be called in the context of process exit (exit_mmap). It is now called in the context of madvise(MADV_DONTNEED). Restructure the routine and check for !mm_users which indicates it is being called in the context of process exit. If being called in process exit context, delete the vma_lock. Otherwise, just unmap and leave the lock. Since the routine is called in more than just process exit context, rename to eliminate 'final' as __unmap_hugetlb_page_range. [1] https://lore.kernel.org/lkml/CAO4mrfdLMXsao9RF4fUE8-Wfde8xmjsKrTNMNC9wjUb6JudD0g@mail.gmail.com/ Fixes: 90e7e7f5ef3f ("mm: enable MADV_DONTNEED for hugetlb mappings") Reported-by: Wei Chen <harperchen1110@gmail.com> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com> Cc: <stable@vger.kernel.org> --- v3 - Check for !mm_users in __unmap_hugepage_range_final instead of creating a separate function. include/linux/hugetlb.h | 4 ++-- mm/hugetlb.c | 30 ++++++++++++++++++++---------- mm/memory.c | 2 +- 3 files changed, 23 insertions(+), 13 deletions(-) diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index a899bc76d677..bc19a1f6ca10 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -158,7 +158,7 @@ long follow_hugetlb_page(struct mm_struct *, struct vm_area_struct *, void unmap_hugepage_range(struct vm_area_struct *, unsigned long, unsigned long, struct page *, zap_flags_t); -void __unmap_hugepage_range_final(struct mmu_gather *tlb, +void __unmap_hugetlb_page_range(struct mmu_gather *tlb, struct vm_area_struct *vma, unsigned long start, unsigned long end, struct page *ref_page, zap_flags_t zap_flags); @@ -418,7 +418,7 @@ static inline unsigned long hugetlb_change_protection( return 0; } -static inline void __unmap_hugepage_range_final(struct mmu_gather *tlb, +static inline void __unmap_hugetlb_page_range(struct mmu_gather *tlb, struct vm_area_struct *vma, unsigned long start, unsigned long end, struct page *ref_page, zap_flags_t zap_flags) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 931789a8f734..3fe1152c3c20 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -5202,27 +5202,37 @@ static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct tlb_flush_mmu_tlbonly(tlb); } -void __unmap_hugepage_range_final(struct mmu_gather *tlb, +void __unmap_hugetlb_page_range(struct mmu_gather *tlb, struct vm_area_struct *vma, unsigned long start, unsigned long end, struct page *ref_page, zap_flags_t zap_flags) { + struct mm_struct *mm = vma->vm_mm; + hugetlb_vma_lock_write(vma); i_mmap_lock_write(vma->vm_file->f_mapping); __unmap_hugepage_range(tlb, vma, start, end, ref_page, zap_flags); /* - * Unlock and free the vma lock before releasing i_mmap_rwsem. When - * the vma_lock is freed, this makes the vma ineligible for pmd - * sharing. And, i_mmap_rwsem is required to set up pmd sharing. 
- * This is important as page tables for this unmapped range will - * be asynchrously deleted. If the page tables are shared, there - * will be issues when accessed by someone else. + * Free the vma_lock here if process exiting */ - __hugetlb_vma_unlock_write_free(vma); - - i_mmap_unlock_write(vma->vm_file->f_mapping); + if (!atomic_read(&mm->mm_users)) { + /* + * Unlock and free the vma lock before releasing i_mmap_rwsem. + * When the vma_lock is freed, this makes the vma ineligible + * for pmd sharing. And, i_mmap_rwsem is required to set up + * pmd sharing. This is important as page tables for this + * unmapped range will be asynchrously deleted. If the page + * tables are shared, there will be issues when accessed by + * someone else. + */ + __hugetlb_vma_unlock_write_free(vma); + i_mmap_unlock_write(vma->vm_file->f_mapping); + } else { + i_mmap_unlock_write(vma->vm_file->f_mapping); + hugetlb_vma_unlock_write(vma); + } } void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start, diff --git a/mm/memory.c b/mm/memory.c index 8e72f703ed99..1de8ea504047 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -1687,7 +1687,7 @@ static void unmap_single_vma(struct mmu_gather *tlb, if (vma->vm_file) { zap_flags_t zap_flags = details ? details->zap_flags : 0; - __unmap_hugepage_range_final(tlb, vma, start, end, + __unmap_hugetlb_page_range(tlb, vma, start, end, NULL, zap_flags); } } else
Hi, Mike, On Sat, Oct 22, 2022 at 07:50:47PM -0700, Mike Kravetz wrote: [...] > -void __unmap_hugepage_range_final(struct mmu_gather *tlb, > +static void __unmap_hugepage_range_locking(struct mmu_gather *tlb, > struct vm_area_struct *vma, unsigned long start, > unsigned long end, struct page *ref_page, > - zap_flags_t zap_flags) > + zap_flags_t zap_flags, bool final) > { > hugetlb_vma_lock_write(vma); > i_mmap_lock_write(vma->vm_file->f_mapping); > > __unmap_hugepage_range(tlb, vma, start, end, ref_page, zap_flags); > > - /* > - * Unlock and free the vma lock before releasing i_mmap_rwsem. When > - * the vma_lock is freed, this makes the vma ineligible for pmd > - * sharing. And, i_mmap_rwsem is required to set up pmd sharing. > - * This is important as page tables for this unmapped range will > - * be asynchrously deleted. If the page tables are shared, there > - * will be issues when accessed by someone else. > - */ > - __hugetlb_vma_unlock_write_free(vma); > + if (final) { > + /* > + * Unlock and free the vma lock before releasing i_mmap_rwsem. > + * When the vma_lock is freed, this makes the vma ineligible > + * for pmd sharing. And, i_mmap_rwsem is required to set up > + * pmd sharing. This is important as page tables for this > + * unmapped range will be asynchrously deleted. If the page > + * tables are shared, there will be issues when accessed by > + * someone else. > + */ > + __hugetlb_vma_unlock_write_free(vma); > + i_mmap_unlock_write(vma->vm_file->f_mapping); Pure question: can we rely on hugetlb_vm_op_close() to destroy the hugetlb vma lock? I read the comment above, it seems we are trying to avoid racing with pmd sharing, but I don't see how that could ever happen, since iiuc there should only be two places that unmaps the vma (final==true): (1) munmap: we're holding write lock, so no page fault possible (2) exit_mmap: we've already reset current->mm so no page fault possible > + } else { > + i_mmap_unlock_write(vma->vm_file->f_mapping); > + hugetlb_vma_unlock_write(vma); > + } > +} > > - i_mmap_unlock_write(vma->vm_file->f_mapping); > +void __unmap_hugepage_range_final(struct mmu_gather *tlb, > + struct vm_area_struct *vma, unsigned long start, > + unsigned long end, struct page *ref_page, > + zap_flags_t zap_flags) > +{ > + __unmap_hugepage_range_locking(tlb, vma, start, end, ref_page, > + zap_flags, true); > } > > +#ifdef CONFIG_ADVISE_SYSCALLS > +/* > + * Similar setup as in zap_page_range(). madvise(MADV_DONTNEED) can not call > + * zap_page_range for hugetlb vmas as __unmap_hugepage_range_final will delete > + * the associated vma_lock. > + */ > +void clear_hugetlb_page_range(struct vm_area_struct *vma, unsigned long start, > + unsigned long end) > +{ > + struct mmu_notifier_range range; > + struct mmu_gather tlb; > + > + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm, > + start, end); Is mmu_notifier_invalidate_range_start() missing here? 
> + tlb_gather_mmu(&tlb, vma->vm_mm); > + update_hiwater_rss(vma->vm_mm); > + > + __unmap_hugepage_range_locking(&tlb, vma, start, end, NULL, 0, false); > + > + mmu_notifier_invalidate_range_end(&range); > + tlb_finish_mmu(&tlb); > +} > +#endif > + > void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start, > unsigned long end, struct page *ref_page, > zap_flags_t zap_flags) > diff --git a/mm/madvise.c b/mm/madvise.c > index 2baa93ca2310..90577a669635 100644 > --- a/mm/madvise.c > +++ b/mm/madvise.c > @@ -790,7 +790,10 @@ static int madvise_free_single_vma(struct vm_area_struct *vma, > static long madvise_dontneed_single_vma(struct vm_area_struct *vma, > unsigned long start, unsigned long end) > { > - zap_page_range(vma, start, end - start); > + if (!is_vm_hugetlb_page(vma)) > + zap_page_range(vma, start, end - start); > + else > + clear_hugetlb_page_range(vma, start, end); > return 0; > } This does look a bit unfortunate - zap_page_range() contains yet another is_vm_hugetlb_page() check (further down in unmap_single_vma), it can be very confusing on which code path is really handling hugetlb. The other mm_users check in v3 doesn't need this change, but was a bit hackish to me, because IIUC we're clear on the call paths to trigger this (unmap_vmas), so it seems clean to me to pass that info from the upper stack. Maybe we can have a new zap_flags passed into unmap_single_vma() showing that it's destroying the vma? Thanks,
On 10/26/22 17:42, Peter Xu wrote: > Hi, Mike, > > On Sat, Oct 22, 2022 at 07:50:47PM -0700, Mike Kravetz wrote: > > [...] > > > -void __unmap_hugepage_range_final(struct mmu_gather *tlb, > > +static void __unmap_hugepage_range_locking(struct mmu_gather *tlb, > > struct vm_area_struct *vma, unsigned long start, > > unsigned long end, struct page *ref_page, > > - zap_flags_t zap_flags) > > + zap_flags_t zap_flags, bool final) > > { > > hugetlb_vma_lock_write(vma); > > i_mmap_lock_write(vma->vm_file->f_mapping); > > > > __unmap_hugepage_range(tlb, vma, start, end, ref_page, zap_flags); > > > > - /* > > - * Unlock and free the vma lock before releasing i_mmap_rwsem. When > > - * the vma_lock is freed, this makes the vma ineligible for pmd > > - * sharing. And, i_mmap_rwsem is required to set up pmd sharing. > > - * This is important as page tables for this unmapped range will > > - * be asynchrously deleted. If the page tables are shared, there > > - * will be issues when accessed by someone else. > > - */ > > - __hugetlb_vma_unlock_write_free(vma); > > + if (final) { > > + /* > > + * Unlock and free the vma lock before releasing i_mmap_rwsem. > > + * When the vma_lock is freed, this makes the vma ineligible > > + * for pmd sharing. And, i_mmap_rwsem is required to set up > > + * pmd sharing. This is important as page tables for this > > + * unmapped range will be asynchrously deleted. If the page > > + * tables are shared, there will be issues when accessed by > > + * someone else. > > + */ > > + __hugetlb_vma_unlock_write_free(vma); > > + i_mmap_unlock_write(vma->vm_file->f_mapping); > > Pure question: can we rely on hugetlb_vm_op_close() to destroy the hugetlb > vma lock? > > I read the comment above, it seems we are trying to avoid racing with pmd > sharing, but I don't see how that could ever happen, since iiuc there > should only be two places that unmaps the vma (final==true): > > (1) munmap: we're holding write lock, so no page fault possible > (2) exit_mmap: we've already reset current->mm so no page fault possible > Thanks for taking a look Peter! The possible sharing we are trying to stop would be initiated by a fault in a different process on the same underlying mapping object (inode). The specific vma in exit processing is still linked into the mapping interval tree. So, even though we call huge_pmd_unshare in the unmap processing (in __unmap_hugepage_range) the sharing could later be initiated by another process. Hope that makes sense. That is also the reason the routine page_table_shareable contains this check: /* * match the virtual addresses, permission and the alignment of the * page table page. * * Also, vma_lock (vm_private_data) is required for sharing. */ if (pmd_index(addr) != pmd_index(saddr) || vm_flags != svm_flags || !range_in_vma(svma, sbase, s_end) || !svma->vm_private_data) return 0; FYI - The 'flags' check also prevents a non-uffd mapping from initiating sharing with a uffd mapping. > > + } else { > > + i_mmap_unlock_write(vma->vm_file->f_mapping); > > + hugetlb_vma_unlock_write(vma); > > + } > > +} > > > > - i_mmap_unlock_write(vma->vm_file->f_mapping); > > +void __unmap_hugepage_range_final(struct mmu_gather *tlb, > > + struct vm_area_struct *vma, unsigned long start, > > + unsigned long end, struct page *ref_page, > > + zap_flags_t zap_flags) > > +{ > > + __unmap_hugepage_range_locking(tlb, vma, start, end, ref_page, > > + zap_flags, true); > > } > > > > +#ifdef CONFIG_ADVISE_SYSCALLS > > +/* > > + * Similar setup as in zap_page_range(). 
madvise(MADV_DONTNEED) can not call > > + * zap_page_range for hugetlb vmas as __unmap_hugepage_range_final will delete > > + * the associated vma_lock. > > + */ > > +void clear_hugetlb_page_range(struct vm_area_struct *vma, unsigned long start, > > + unsigned long end) > > +{ > > + struct mmu_notifier_range range; > > + struct mmu_gather tlb; > > + > > + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm, > > + start, end); > > Is mmu_notifier_invalidate_range_start() missing here? > It certainly does look like it. When I created this routine, I was trying to mimic what was done in the current calling path zap_page_range to __unmap_hugepage_range_final. Now when I look at that, I am not seeing a mmu_notifier_invalidate_range_start/end. Am I missing something, or are these missing today? Do note that we do MMU_NOTIFY_UNMAP in __unmap_hugepage_range. > > + tlb_gather_mmu(&tlb, vma->vm_mm); > > + update_hiwater_rss(vma->vm_mm); > > + > > + __unmap_hugepage_range_locking(&tlb, vma, start, end, NULL, 0, false); > > + > > + mmu_notifier_invalidate_range_end(&range); > > + tlb_finish_mmu(&tlb); > > +} > > +#endif > > + > > void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start, > > unsigned long end, struct page *ref_page, > > zap_flags_t zap_flags) > > diff --git a/mm/madvise.c b/mm/madvise.c > > index 2baa93ca2310..90577a669635 100644 > > --- a/mm/madvise.c > > +++ b/mm/madvise.c > > @@ -790,7 +790,10 @@ static int madvise_free_single_vma(struct vm_area_struct *vma, > > static long madvise_dontneed_single_vma(struct vm_area_struct *vma, > > unsigned long start, unsigned long end) > > { > > - zap_page_range(vma, start, end - start); > > + if (!is_vm_hugetlb_page(vma)) > > + zap_page_range(vma, start, end - start); > > + else > > + clear_hugetlb_page_range(vma, start, end); > > return 0; > > } > > This does look a bit unfortunate - zap_page_range() contains yet another > is_vm_hugetlb_page() check (further down in unmap_single_vma), it can be > very confusing on which code path is really handling hugetlb. > > The other mm_users check in v3 doesn't need this change, but was a bit > hackish to me, because IIUC we're clear on the call paths to trigger this > (unmap_vmas), so it seems clean to me to pass that info from the upper > stack. > > Maybe we can have a new zap_flags passed into unmap_single_vma() showing > that it's destroying the vma? I thought about that. However, we would need to start passing the flag here into zap_page_range as this is the beginning of that call down into the hugetlb code where we do not want to remove zap_page_rangethe vma_lock.
On Wed, Oct 26, 2022 at 04:54:01PM -0700, Mike Kravetz wrote: > On 10/26/22 17:42, Peter Xu wrote: > > Hi, Mike, > > > > On Sat, Oct 22, 2022 at 07:50:47PM -0700, Mike Kravetz wrote: > > > > [...] > > > > > -void __unmap_hugepage_range_final(struct mmu_gather *tlb, > > > +static void __unmap_hugepage_range_locking(struct mmu_gather *tlb, > > > struct vm_area_struct *vma, unsigned long start, > > > unsigned long end, struct page *ref_page, > > > - zap_flags_t zap_flags) > > > + zap_flags_t zap_flags, bool final) > > > { > > > hugetlb_vma_lock_write(vma); > > > i_mmap_lock_write(vma->vm_file->f_mapping); > > > > > > __unmap_hugepage_range(tlb, vma, start, end, ref_page, zap_flags); > > > > > > - /* > > > - * Unlock and free the vma lock before releasing i_mmap_rwsem. When > > > - * the vma_lock is freed, this makes the vma ineligible for pmd > > > - * sharing. And, i_mmap_rwsem is required to set up pmd sharing. > > > - * This is important as page tables for this unmapped range will > > > - * be asynchrously deleted. If the page tables are shared, there > > > - * will be issues when accessed by someone else. > > > - */ > > > - __hugetlb_vma_unlock_write_free(vma); > > > + if (final) { > > > + /* > > > + * Unlock and free the vma lock before releasing i_mmap_rwsem. > > > + * When the vma_lock is freed, this makes the vma ineligible > > > + * for pmd sharing. And, i_mmap_rwsem is required to set up > > > + * pmd sharing. This is important as page tables for this > > > + * unmapped range will be asynchrously deleted. If the page > > > + * tables are shared, there will be issues when accessed by > > > + * someone else. > > > + */ > > > + __hugetlb_vma_unlock_write_free(vma); > > > + i_mmap_unlock_write(vma->vm_file->f_mapping); > > > > Pure question: can we rely on hugetlb_vm_op_close() to destroy the hugetlb > > vma lock? > > > > I read the comment above, it seems we are trying to avoid racing with pmd > > sharing, but I don't see how that could ever happen, since iiuc there > > should only be two places that unmaps the vma (final==true): > > > > (1) munmap: we're holding write lock, so no page fault possible > > (2) exit_mmap: we've already reset current->mm so no page fault possible > > > > Thanks for taking a look Peter! > > The possible sharing we are trying to stop would be initiated by a fault > in a different process on the same underlying mapping object (inode). The > specific vma in exit processing is still linked into the mapping interval > tree. So, even though we call huge_pmd_unshare in the unmap processing (in > __unmap_hugepage_range) the sharing could later be initiated by another > process. > > Hope that makes sense. That is also the reason the routine > page_table_shareable contains this check: > > /* > * match the virtual addresses, permission and the alignment of the > * page table page. > * > * Also, vma_lock (vm_private_data) is required for sharing. > */ > if (pmd_index(addr) != pmd_index(saddr) || > vm_flags != svm_flags || > !range_in_vma(svma, sbase, s_end) || > !svma->vm_private_data) > return 0; Ah, makes sense. Hmm, then I'm wondering whether hugetlb_vma_lock_free() would ever be useful at all? Because remove_vma() (or say, the close() hook) seems to always be called after an precedent unmap_vmas(). > > FYI - The 'flags' check also prevents a non-uffd mapping from initiating > sharing with a uffd mapping. 
> > > > + } else { > > > + i_mmap_unlock_write(vma->vm_file->f_mapping); > > > + hugetlb_vma_unlock_write(vma); > > > + } > > > +} > > > > > > - i_mmap_unlock_write(vma->vm_file->f_mapping); > > > +void __unmap_hugepage_range_final(struct mmu_gather *tlb, > > > + struct vm_area_struct *vma, unsigned long start, > > > + unsigned long end, struct page *ref_page, > > > + zap_flags_t zap_flags) > > > +{ > > > + __unmap_hugepage_range_locking(tlb, vma, start, end, ref_page, > > > + zap_flags, true); > > > } > > > > > > +#ifdef CONFIG_ADVISE_SYSCALLS > > > +/* > > > + * Similar setup as in zap_page_range(). madvise(MADV_DONTNEED) can not call > > > + * zap_page_range for hugetlb vmas as __unmap_hugepage_range_final will delete > > > + * the associated vma_lock. > > > + */ > > > +void clear_hugetlb_page_range(struct vm_area_struct *vma, unsigned long start, > > > + unsigned long end) > > > +{ > > > + struct mmu_notifier_range range; > > > + struct mmu_gather tlb; > > > + > > > + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm, > > > + start, end); > > > > Is mmu_notifier_invalidate_range_start() missing here? > > > > It certainly does look like it. When I created this routine, I was trying to > mimic what was done in the current calling path zap_page_range to > __unmap_hugepage_range_final. Now when I look at that, I am not seeing > a mmu_notifier_invalidate_range_start/end. Am I missing something, or > are these missing today? I'm not sure whether we're looking at the same code base; here it's in zap_page_range() itself. mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm, start, start + size); tlb_gather_mmu(&tlb, vma->vm_mm); update_hiwater_rss(vma->vm_mm); mmu_notifier_invalidate_range_start(&range); do { unmap_single_vma(&tlb, vma, start, range.end, NULL); } while ((vma = mas_find(&mas, end - 1)) != NULL); mmu_notifier_invalidate_range_end(&range); > Do note that we do MMU_NOTIFY_UNMAP in __unmap_hugepage_range. Hmm, I think we may want CLEAR for zap-only and UNMAP only for unmap. * @MMU_NOTIFY_UNMAP: either munmap() that unmap the range or a mremap() that * move the range * @MMU_NOTIFY_CLEAR: clear page table entry (many reasons for this like * madvise() or replacing a page by another one, ...). The other thing is that unmap_vmas() also notifies (same to zap_page_range), it looks a duplicated notification if any of them calls __unmap_hugepage_range() at last. 
> > > > + tlb_gather_mmu(&tlb, vma->vm_mm); > > > + update_hiwater_rss(vma->vm_mm); > > > + > > > + __unmap_hugepage_range_locking(&tlb, vma, start, end, NULL, 0, false); > > > + > > > + mmu_notifier_invalidate_range_end(&range); > > > + tlb_finish_mmu(&tlb); > > > +} > > > +#endif > > > + > > > void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start, > > > unsigned long end, struct page *ref_page, > > > zap_flags_t zap_flags) > > > diff --git a/mm/madvise.c b/mm/madvise.c > > > index 2baa93ca2310..90577a669635 100644 > > > --- a/mm/madvise.c > > > +++ b/mm/madvise.c > > > @@ -790,7 +790,10 @@ static int madvise_free_single_vma(struct vm_area_struct *vma, > > > static long madvise_dontneed_single_vma(struct vm_area_struct *vma, > > > unsigned long start, unsigned long end) > > > { > > > - zap_page_range(vma, start, end - start); > > > + if (!is_vm_hugetlb_page(vma)) > > > + zap_page_range(vma, start, end - start); > > > + else > > > + clear_hugetlb_page_range(vma, start, end); > > > return 0; > > > } > > > > This does look a bit unfortunate - zap_page_range() contains yet another > > is_vm_hugetlb_page() check (further down in unmap_single_vma), it can be > > very confusing on which code path is really handling hugetlb. > > > > The other mm_users check in v3 doesn't need this change, but was a bit > > hackish to me, because IIUC we're clear on the call paths to trigger this > > (unmap_vmas), so it seems clean to me to pass that info from the upper > > stack. > > > > Maybe we can have a new zap_flags passed into unmap_single_vma() showing > > that it's destroying the vma? > > I thought about that. However, we would need to start passing the flag > here into zap_page_range as this is the beginning of that call down into > the hugetlb code where we do not want to remove zap_page_rangethe > vma_lock. Right. I was thinking just attach the new flag in unmap_vmas(). A pesudo (not compiled) code attached. Thanks,
On 10/26/22 21:12, Peter Xu wrote: > On Wed, Oct 26, 2022 at 04:54:01PM -0700, Mike Kravetz wrote: > > On 10/26/22 17:42, Peter Xu wrote: > > > > > > Pure question: can we rely on hugetlb_vm_op_close() to destroy the hugetlb > > > vma lock? > > > > > > I read the comment above, it seems we are trying to avoid racing with pmd > > > sharing, but I don't see how that could ever happen, since iiuc there > > > should only be two places that unmaps the vma (final==true): > > > > > > (1) munmap: we're holding write lock, so no page fault possible > > > (2) exit_mmap: we've already reset current->mm so no page fault possible > > > > > > > Thanks for taking a look Peter! > > > > The possible sharing we are trying to stop would be initiated by a fault > > in a different process on the same underlying mapping object (inode). The > > specific vma in exit processing is still linked into the mapping interval > > tree. So, even though we call huge_pmd_unshare in the unmap processing (in > > __unmap_hugepage_range) the sharing could later be initiated by another > > process. > > > > Hope that makes sense. That is also the reason the routine > > page_table_shareable contains this check: > > > > /* > > * match the virtual addresses, permission and the alignment of the > > * page table page. > > * > > * Also, vma_lock (vm_private_data) is required for sharing. > > */ > > if (pmd_index(addr) != pmd_index(saddr) || > > vm_flags != svm_flags || > > !range_in_vma(svma, sbase, s_end) || > > !svma->vm_private_data) > > return 0; > > Ah, makes sense. Hmm, then I'm wondering whether hugetlb_vma_lock_free() > would ever be useful at all? Because remove_vma() (or say, the close() > hook) seems to always be called after an precedent unmap_vmas(). You are right. hugetlb_vma_lock_free will almost always be a noop when called from the close hook. It is still 'needed' for vms setup error pathss. > > > > +void clear_hugetlb_page_range(struct vm_area_struct *vma, unsigned long start, > > > > + unsigned long end) > > > > +{ > > > > + struct mmu_notifier_range range; > > > > + struct mmu_gather tlb; > > > > + > > > > + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm, > > > > + start, end); > > > > > > Is mmu_notifier_invalidate_range_start() missing here? > > > > > > > It certainly does look like it. When I created this routine, I was trying to > > mimic what was done in the current calling path zap_page_range to > > __unmap_hugepage_range_final. Now when I look at that, I am not seeing > > a mmu_notifier_invalidate_range_start/end. Am I missing something, or > > are these missing today? > > I'm not sure whether we're looking at the same code base; here it's in > zap_page_range() itself. > > mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm, > start, start + size); > tlb_gather_mmu(&tlb, vma->vm_mm); > update_hiwater_rss(vma->vm_mm); > mmu_notifier_invalidate_range_start(&range); > do { > unmap_single_vma(&tlb, vma, start, range.end, NULL); > } while ((vma = mas_find(&mas, end - 1)) != NULL); > mmu_notifier_invalidate_range_end(&range); Yes, I missed that. Thanks! > > > Do note that we do MMU_NOTIFY_UNMAP in __unmap_hugepage_range. > > Hmm, I think we may want CLEAR for zap-only and UNMAP only for unmap. > > * @MMU_NOTIFY_UNMAP: either munmap() that unmap the range or a mremap() that > * move the range > * @MMU_NOTIFY_CLEAR: clear page table entry (many reasons for this like > * madvise() or replacing a page by another one, ...). 
> > The other thing is that unmap_vmas() also notifies (same to > zap_page_range), it looks a duplicated notification if any of them calls > __unmap_hugepage_range() at last. The only call into __unmap_hugepage_range() from generic zap/unmap calls is via __unmap_hugepage_range_final. Other call paths are entirely within hugetlb code. > > > > + tlb_gather_mmu(&tlb, vma->vm_mm); > > > > + update_hiwater_rss(vma->vm_mm); > > > > + > > > > + __unmap_hugepage_range_locking(&tlb, vma, start, end, NULL, 0, false); > > > > + > > > > + mmu_notifier_invalidate_range_end(&range); > > > > + tlb_finish_mmu(&tlb); > > > > +} > > > > +#endif > > > > + > > > > void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start, > > > > unsigned long end, struct page *ref_page, > > > > zap_flags_t zap_flags) > > > > diff --git a/mm/madvise.c b/mm/madvise.c > > > > index 2baa93ca2310..90577a669635 100644 > > > > --- a/mm/madvise.c > > > > +++ b/mm/madvise.c > > > > @@ -790,7 +790,10 @@ static int madvise_free_single_vma(struct vm_area_struct *vma, > > > > static long madvise_dontneed_single_vma(struct vm_area_struct *vma, > > > > unsigned long start, unsigned long end) > > > > { > > > > - zap_page_range(vma, start, end - start); > > > > + if (!is_vm_hugetlb_page(vma)) > > > > + zap_page_range(vma, start, end - start); > > > > + else > > > > + clear_hugetlb_page_range(vma, start, end); > > > > return 0; > > > > } > > > > > > This does look a bit unfortunate - zap_page_range() contains yet another > > > is_vm_hugetlb_page() check (further down in unmap_single_vma), it can be > > > very confusing on which code path is really handling hugetlb. > > > > > > The other mm_users check in v3 doesn't need this change, but was a bit > > > hackish to me, because IIUC we're clear on the call paths to trigger this > > > (unmap_vmas), so it seems clean to me to pass that info from the upper > > > stack. > > > > > > Maybe we can have a new zap_flags passed into unmap_single_vma() showing > > > that it's destroying the vma? > > > > I thought about that. However, we would need to start passing the flag > > here into zap_page_range as this is the beginning of that call down into > > the hugetlb code where we do not want to remove zap_page_rangethe > > vma_lock. > > Right. I was thinking just attach the new flag in unmap_vmas(). A pesudo > (not compiled) code attached. I took your suggestions and came up with a new version of this patch. Not sure if I love the new zap flag, as it is only used by hugetlb code. I also added a bool to __unmap_hugepage_range to eliminate the duplicate notification calls. From 15ffe922b60af9f4c19927d5d5aaca75840d0f6c Mon Sep 17 00:00:00 2001 From: Mike Kravetz <mike.kravetz@oracle.com> Date: Fri, 28 Oct 2022 07:46:50 -0700 Subject: [PATCH v5] hugetlb: don't delete vma_lock in hugetlb MADV_DONTNEED processing madvise(MADV_DONTNEED) ends up calling zap_page_range() to clear the page tables associated with the address range. For hugetlb vmas, zap_page_range will call __unmap_hugepage_range_final. However, __unmap_hugepage_range_final assumes the passed vma is about to be removed and deletes the vma_lock to prevent pmd sharing as the vma is on the way out. In the case of madvise(MADV_DONTNEED) the vma remains, but the missing vma_lock prevents pmd sharing and could potentially lead to issues with truncation/fault races. This issue was originally reported here [1] as a BUG triggered in page_try_dup_anon_rmap. 
Prior to the introduction of the hugetlb vma_lock, __unmap_hugepage_range_final
cleared the VM_MAYSHARE flag to prevent pmd sharing.  Subsequent faults on this
vma were confused because VM_MAYSHARE indicates a sharable vma but was no
longer set, so page_mapping was not set in new pages added to the page table.
This resulted in pages that appeared anonymous in a VM_SHARED vma and triggered
the BUG.

Create a new routine clear_hugetlb_page_range() that can be called from
madvise(MADV_DONTNEED) for hugetlb vmas.  It has the same setup as
zap_page_range, but does not delete the vma_lock.  Also, add a new zap flag
ZAP_FLAG_UNMAP to indicate an unmap call from unmap_vmas().  This is used to
indicate the 'final' unmapping of a vma.  The routine __unmap_hugepage_range
is changed to take a notification_needed argument.  This is used to prevent
duplicate notifications.

[1] https://lore.kernel.org/lkml/CAO4mrfdLMXsao9RF4fUE8-Wfde8xmjsKrTNMNC9wjUb6JudD0g@mail.gmail.com/
Fixes: 90e7e7f5ef3f ("mm: enable MADV_DONTNEED for hugetlb mappings")
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Reported-by: Wei Chen <harperchen1110@gmail.com>
Cc: <stable@vger.kernel.org>
---
 include/linux/hugetlb.h |  7 ++++
 include/linux/mm.h      |  3 ++
 mm/hugetlb.c            | 93 +++++++++++++++++++++++++++++++----------
 mm/madvise.c            |  5 ++-
 mm/memory.c             |  2 +-
 5 files changed, 86 insertions(+), 24 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 3568b90b397d..badcb277603d 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -158,6 +158,8 @@ long follow_hugetlb_page(struct mm_struct *, struct vm_area_struct *,
 void unmap_hugepage_range(struct vm_area_struct *,
 			  unsigned long, unsigned long, struct page *,
 			  zap_flags_t);
+void clear_hugetlb_page_range(struct vm_area_struct *vma,
+			  unsigned long start, unsigned long end);
 void __unmap_hugepage_range_final(struct mmu_gather *tlb,
 			  struct vm_area_struct *vma,
 			  unsigned long start, unsigned long end,
@@ -428,6 +430,11 @@ static inline void __unmap_hugepage_range_final(struct mmu_gather *tlb,
 	BUG();
 }
 
+static void __maybe_unused clear_hugetlb_page_range(struct vm_area_struct *vma,
+			unsigned long start, unsigned long end)
+{
+}
+
 static inline vm_fault_t hugetlb_fault(struct mm_struct *mm,
 			struct vm_area_struct *vma, unsigned long address,
 			unsigned int flags)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 978c17df053e..517c8cc8ccb9 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3464,4 +3464,7 @@ madvise_set_anon_name(struct mm_struct *mm, unsigned long start,
  */
 #define ZAP_FLAG_DROP_MARKER	((__force zap_flags_t) BIT(0))
 
+/* Set in unmap_vmas() to indicate an unmap call.  Only used by hugetlb */
+#define ZAP_FLAG_UNMAP		((__force zap_flags_t) BIT(1))
+
 #endif /* _LINUX_MM_H */
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 4a0289ef09fa..0309a7c0f3bc 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5062,7 +5062,8 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
 
 static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 				   unsigned long start, unsigned long end,
-				   struct page *ref_page, zap_flags_t zap_flags)
+				   struct page *ref_page, zap_flags_t zap_flags,
+				   bool notification_needed)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long address;
@@ -5087,13 +5088,16 @@ static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct
 	tlb_change_page_size(tlb, sz);
 	tlb_start_vma(tlb, vma);
 
-	/*
-	 * If sharing possible, alert mmu notifiers of worst case.
-	 */
-	mmu_notifier_range_init(&range, MMU_NOTIFY_UNMAP, 0, vma, mm, start,
-				end);
-	adjust_range_if_pmd_sharing_possible(vma, &range.start, &range.end);
-	mmu_notifier_invalidate_range_start(&range);
+	if (notification_needed) {
+		/*
+		 * If sharing possible, alert mmu notifiers of worst case.
+		 */
+		mmu_notifier_range_init(&range, MMU_NOTIFY_UNMAP, 0, vma, mm,
+					start, end);
+		adjust_range_if_pmd_sharing_possible(vma, &range.start,
+						     &range.end);
+		mmu_notifier_invalidate_range_start(&range);
+	}
 	last_addr_mask = hugetlb_mask_last_page(h);
 	address = start;
 	for (; address < end; address += sz) {
@@ -5178,7 +5182,8 @@ static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct
 		if (ref_page)
 			break;
 	}
-	mmu_notifier_invalidate_range_end(&range);
+	if (notification_needed)
+		mmu_notifier_invalidate_range_end(&range);
 	tlb_end_vma(tlb, vma);
 
 	/*
@@ -5198,29 +5203,72 @@ static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct
 	tlb_flush_mmu_tlbonly(tlb);
 }
 
-void __unmap_hugepage_range_final(struct mmu_gather *tlb,
+static void __unmap_hugepage_range_locking(struct mmu_gather *tlb,
 			  struct vm_area_struct *vma, unsigned long start,
 			  unsigned long end, struct page *ref_page,
 			  zap_flags_t zap_flags)
 {
+	bool final = zap_flags & ZAP_FLAG_UNMAP;
+
 	hugetlb_vma_lock_write(vma);
 	i_mmap_lock_write(vma->vm_file->f_mapping);
 
-	__unmap_hugepage_range(tlb, vma, start, end, ref_page, zap_flags);
+	__unmap_hugepage_range(tlb, vma, start, end, ref_page, zap_flags,
+			       false);
 
-	/*
-	 * Unlock and free the vma lock before releasing i_mmap_rwsem.  When
-	 * the vma_lock is freed, this makes the vma ineligible for pmd
-	 * sharing.  And, i_mmap_rwsem is required to set up pmd sharing.
-	 * This is important as page tables for this unmapped range will
-	 * be asynchrously deleted.  If the page tables are shared, there
-	 * will be issues when accessed by someone else.
-	 */
-	__hugetlb_vma_unlock_write_free(vma);
+	if (final) {
+		/*
+		 * Unlock and free the vma lock before releasing i_mmap_rwsem.
+		 * When the vma_lock is freed, this makes the vma ineligible
+		 * for pmd sharing.  And, i_mmap_rwsem is required to set up
+		 * pmd sharing.  This is important as page tables for this
+		 * unmapped range will be asynchrously deleted.  If the page
+		 * tables are shared, there will be issues when accessed by
+		 * someone else.
+		 */
+		__hugetlb_vma_unlock_write_free(vma);
+		i_mmap_unlock_write(vma->vm_file->f_mapping);
+	} else {
+		i_mmap_unlock_write(vma->vm_file->f_mapping);
+		hugetlb_vma_unlock_write(vma);
+	}
+}
 
-	i_mmap_unlock_write(vma->vm_file->f_mapping);
+void __unmap_hugepage_range_final(struct mmu_gather *tlb,
+			  struct vm_area_struct *vma, unsigned long start,
+			  unsigned long end, struct page *ref_page,
+			  zap_flags_t zap_flags)
+{
+	__unmap_hugepage_range_locking(tlb, vma, start, end, ref_page,
+				       zap_flags);
 }
 
+#ifdef CONFIG_ADVISE_SYSCALLS
+/*
+ * Similar setup as in zap_page_range().  madvise(MADV_DONTNEED) can not call
+ * zap_page_range for hugetlb vmas as __unmap_hugepage_range_final will delete
+ * the associated vma_lock.
+ */
+void clear_hugetlb_page_range(struct vm_area_struct *vma, unsigned long start,
+			      unsigned long end)
+{
+	struct mmu_notifier_range range;
+	struct mmu_gather tlb;
+
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
+				start, end);
+	adjust_range_if_pmd_sharing_possible(vma, &range.start, &range.end);
+	tlb_gather_mmu(&tlb, vma->vm_mm);
+	update_hiwater_rss(vma->vm_mm);
+	mmu_notifier_invalidate_range_start(&range);
+
+	__unmap_hugepage_range_locking(&tlb, vma, start, end, NULL, 0);
+
+	mmu_notifier_invalidate_range_end(&range);
+	tlb_finish_mmu(&tlb);
+}
+#endif
+
 void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
 			  unsigned long end, struct page *ref_page,
 			  zap_flags_t zap_flags)
@@ -5228,7 +5276,8 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
 	struct mmu_gather tlb;
 
 	tlb_gather_mmu(&tlb, vma->vm_mm);
-	__unmap_hugepage_range(&tlb, vma, start, end, ref_page, zap_flags);
+	__unmap_hugepage_range(&tlb, vma, start, end, ref_page, zap_flags,
+			       true);
 	tlb_finish_mmu(&tlb);
 }
 
diff --git a/mm/madvise.c b/mm/madvise.c
index c7105ec6d08c..d8b4d7e56939 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -790,7 +790,10 @@ static int madvise_free_single_vma(struct vm_area_struct *vma,
 static long madvise_dontneed_single_vma(struct vm_area_struct *vma,
 					unsigned long start, unsigned long end)
 {
-	zap_page_range(vma, start, end - start);
+	if (!is_vm_hugetlb_page(vma))
+		zap_page_range(vma, start, end - start);
+	else
+		clear_hugetlb_page_range(vma, start, end);
 	return 0;
 }
diff --git a/mm/memory.c b/mm/memory.c
index c5599a9279b1..679b702af4ce 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1671,7 +1671,7 @@ void unmap_vmas(struct mmu_gather *tlb, struct maple_tree *mt,
 {
 	struct mmu_notifier_range range;
 	struct zap_details details = {
-		.zap_flags = ZAP_FLAG_DROP_MARKER,
+		.zap_flags = ZAP_FLAG_DROP_MARKER | ZAP_FLAG_UNMAP,
 		/* Careful - we need to zap private pages too! */
 		.even_cows = true,
 	};
On Fri, Oct 28, 2022 at 08:23:25AM -0700, Mike Kravetz wrote: > On 10/26/22 21:12, Peter Xu wrote: > > On Wed, Oct 26, 2022 at 04:54:01PM -0700, Mike Kravetz wrote: > > > On 10/26/22 17:42, Peter Xu wrote: > > > > > > > > Pure question: can we rely on hugetlb_vm_op_close() to destroy the hugetlb > > > > vma lock? > > > > > > > > I read the comment above, it seems we are trying to avoid racing with pmd > > > > sharing, but I don't see how that could ever happen, since iiuc there > > > > should only be two places that unmaps the vma (final==true): > > > > > > > > (1) munmap: we're holding write lock, so no page fault possible > > > > (2) exit_mmap: we've already reset current->mm so no page fault possible > > > > > > > > > > Thanks for taking a look Peter! > > > > > > The possible sharing we are trying to stop would be initiated by a fault > > > in a different process on the same underlying mapping object (inode). The > > > specific vma in exit processing is still linked into the mapping interval > > > tree. So, even though we call huge_pmd_unshare in the unmap processing (in > > > __unmap_hugepage_range) the sharing could later be initiated by another > > > process. > > > > > > Hope that makes sense. That is also the reason the routine > > > page_table_shareable contains this check: > > > > > > /* > > > * match the virtual addresses, permission and the alignment of the > > > * page table page. > > > * > > > * Also, vma_lock (vm_private_data) is required for sharing. > > > */ > > > if (pmd_index(addr) != pmd_index(saddr) || > > > vm_flags != svm_flags || > > > !range_in_vma(svma, sbase, s_end) || > > > !svma->vm_private_data) > > > return 0; > > > > Ah, makes sense. Hmm, then I'm wondering whether hugetlb_vma_lock_free() > > would ever be useful at all? Because remove_vma() (or say, the close() > > hook) seems to always be called after an precedent unmap_vmas(). > > You are right. hugetlb_vma_lock_free will almost always be a noop when > called from the close hook. It is still 'needed' for vms setup error > pathss. Ah, yes. Not sure whether it would be worthwhile to have a comment for that in the close() hook, because it's rare that the vma lock is released (and need to be released) before the vma destroy hook function. The pmd unsharing definitely complicates things. In all cases, definitely worth a repost for this, only to raise this point up. > > > > > > +void clear_hugetlb_page_range(struct vm_area_struct *vma, unsigned long start, > > > > > + unsigned long end) > > > > > +{ > > > > > + struct mmu_notifier_range range; > > > > > + struct mmu_gather tlb; > > > > > + > > > > > + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm, > > > > > + start, end); > > > > > > > > Is mmu_notifier_invalidate_range_start() missing here? > > > > > > > > > > It certainly does look like it. When I created this routine, I was trying to > > > mimic what was done in the current calling path zap_page_range to > > > __unmap_hugepage_range_final. Now when I look at that, I am not seeing > > > a mmu_notifier_invalidate_range_start/end. Am I missing something, or > > > are these missing today? > > > > I'm not sure whether we're looking at the same code base; here it's in > > zap_page_range() itself. 
> > > > mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm, > > start, start + size); > > tlb_gather_mmu(&tlb, vma->vm_mm); > > update_hiwater_rss(vma->vm_mm); > > mmu_notifier_invalidate_range_start(&range); > > do { > > unmap_single_vma(&tlb, vma, start, range.end, NULL); > > } while ((vma = mas_find(&mas, end - 1)) != NULL); > > mmu_notifier_invalidate_range_end(&range); > > Yes, I missed that. Thanks! > > > > > > Do note that we do MMU_NOTIFY_UNMAP in __unmap_hugepage_range. > > > > Hmm, I think we may want CLEAR for zap-only and UNMAP only for unmap. > > > > * @MMU_NOTIFY_UNMAP: either munmap() that unmap the range or a mremap() that > > * move the range > > * @MMU_NOTIFY_CLEAR: clear page table entry (many reasons for this like > > * madvise() or replacing a page by another one, ...). > > > > The other thing is that unmap_vmas() also notifies (same to > > zap_page_range), it looks a duplicated notification if any of them calls > > __unmap_hugepage_range() at last. > > The only call into __unmap_hugepage_range() from generic zap/unmap calls > is via __unmap_hugepage_range_final. Other call paths are entirely > within hugetlb code. Right, the duplication only happens on the outside-hugetlb (aka generic mm) calls. I saw that below it's being considered, thanks. Though I had a (maybe...) better thought, more below. > > > > > > + tlb_gather_mmu(&tlb, vma->vm_mm); > > > > > + update_hiwater_rss(vma->vm_mm); > > > > > + > > > > > + __unmap_hugepage_range_locking(&tlb, vma, start, end, NULL, 0, false); > > > > > + > > > > > + mmu_notifier_invalidate_range_end(&range); > > > > > + tlb_finish_mmu(&tlb); > > > > > +} > > > > > +#endif > > > > > + > > > > > void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start, > > > > > unsigned long end, struct page *ref_page, > > > > > zap_flags_t zap_flags) > > > > > diff --git a/mm/madvise.c b/mm/madvise.c > > > > > index 2baa93ca2310..90577a669635 100644 > > > > > --- a/mm/madvise.c > > > > > +++ b/mm/madvise.c > > > > > @@ -790,7 +790,10 @@ static int madvise_free_single_vma(struct vm_area_struct *vma, > > > > > static long madvise_dontneed_single_vma(struct vm_area_struct *vma, > > > > > unsigned long start, unsigned long end) > > > > > { > > > > > - zap_page_range(vma, start, end - start); > > > > > + if (!is_vm_hugetlb_page(vma)) > > > > > + zap_page_range(vma, start, end - start); > > > > > + else > > > > > + clear_hugetlb_page_range(vma, start, end); > > > > > return 0; > > > > > } > > > > > > > > This does look a bit unfortunate - zap_page_range() contains yet another > > > > is_vm_hugetlb_page() check (further down in unmap_single_vma), it can be > > > > very confusing on which code path is really handling hugetlb. > > > > > > > > The other mm_users check in v3 doesn't need this change, but was a bit > > > > hackish to me, because IIUC we're clear on the call paths to trigger this > > > > (unmap_vmas), so it seems clean to me to pass that info from the upper > > > > stack. > > > > > > > > Maybe we can have a new zap_flags passed into unmap_single_vma() showing > > > > that it's destroying the vma? > > > > > > I thought about that. However, we would need to start passing the flag > > > here into zap_page_range as this is the beginning of that call down into > > > the hugetlb code where we do not want to remove zap_page_rangethe > > > vma_lock. > > > > Right. I was thinking just attach the new flag in unmap_vmas(). A pesudo > > (not compiled) code attached. 
> > I took your suggestions and came up with a new version of this patch. Not > sure if I love the new zap flag, as it is only used by hugetlb code. I also > added a bool to __unmap_hugepage_range to eliminate the duplicate notification > calls. > > From 15ffe922b60af9f4c19927d5d5aaca75840d0f6c Mon Sep 17 00:00:00 2001 > From: Mike Kravetz <mike.kravetz@oracle.com> > Date: Fri, 28 Oct 2022 07:46:50 -0700 > Subject: [PATCH v5] hugetlb: don't delete vma_lock in hugetlb MADV_DONTNEED > processing > > madvise(MADV_DONTNEED) ends up calling zap_page_range() to clear the page > tables associated with the address range. For hugetlb vmas, > zap_page_range will call __unmap_hugepage_range_final. However, > __unmap_hugepage_range_final assumes the passed vma is about to be removed > and deletes the vma_lock to prevent pmd sharing as the vma is on the way > out. In the case of madvise(MADV_DONTNEED) the vma remains, but the > missing vma_lock prevents pmd sharing and could potentially lead to issues > with truncation/fault races. > > This issue was originally reported here [1] as a BUG triggered in > page_try_dup_anon_rmap. Prior to the introduction of the hugetlb > vma_lock, __unmap_hugepage_range_final cleared the VM_MAYSHARE flag to > prevent pmd sharing. Subsequent faults on this vma were confused as > VM_MAYSHARE indicates a sharable vma, but was not set so page_mapping was > not set in new pages added to the page table. This resulted in pages that > appeared anonymous in a VM_SHARED vma and triggered the BUG. > > Create a new routine clear_hugetlb_page_range() that can be called from > madvise(MADV_DONTNEED) for hugetlb vmas. It has the same setup as > zap_page_range, but does not delete the vma_lock. Also, add a new zap > flag ZAP_FLAG_UNMAP to indicate an unmap call from unmap_vmas(). This > is used to indicate the 'final' unmapping of a vma. The routine > __unmap_hugepage_range to take a notification_needed argument. This is > used to prevent duplicate notifications. 
> > [1] https://lore.kernel.org/lkml/CAO4mrfdLMXsao9RF4fUE8-Wfde8xmjsKrTNMNC9wjUb6JudD0g@mail.gmail.com/ > Fixes: 90e7e7f5ef3f ("mm: enable MADV_DONTNEED for hugetlb mappings") > Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com> > Reported-by: Wei Chen <harperchen1110@gmail.com> > Cc: <stable@vger.kernel.org> > --- > include/linux/hugetlb.h | 7 ++++ > include/linux/mm.h | 3 ++ > mm/hugetlb.c | 93 +++++++++++++++++++++++++++++++---------- > mm/madvise.c | 5 ++- > mm/memory.c | 2 +- > 5 files changed, 86 insertions(+), 24 deletions(-) > > diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h > index 3568b90b397d..badcb277603d 100644 > --- a/include/linux/hugetlb.h > +++ b/include/linux/hugetlb.h > @@ -158,6 +158,8 @@ long follow_hugetlb_page(struct mm_struct *, struct vm_area_struct *, > void unmap_hugepage_range(struct vm_area_struct *, > unsigned long, unsigned long, struct page *, > zap_flags_t); > +void clear_hugetlb_page_range(struct vm_area_struct *vma, > + unsigned long start, unsigned long end); > void __unmap_hugepage_range_final(struct mmu_gather *tlb, > struct vm_area_struct *vma, > unsigned long start, unsigned long end, > @@ -428,6 +430,11 @@ static inline void __unmap_hugepage_range_final(struct mmu_gather *tlb, > BUG(); > } > > +static void __maybe_unused clear_hugetlb_page_range(struct vm_area_struct *vma, > + unsigned long start, unsigned long end) > +{ > +} > + > static inline vm_fault_t hugetlb_fault(struct mm_struct *mm, > struct vm_area_struct *vma, unsigned long address, > unsigned int flags) > diff --git a/include/linux/mm.h b/include/linux/mm.h > index 978c17df053e..517c8cc8ccb9 100644 > --- a/include/linux/mm.h > +++ b/include/linux/mm.h > @@ -3464,4 +3464,7 @@ madvise_set_anon_name(struct mm_struct *mm, unsigned long start, > */ > #define ZAP_FLAG_DROP_MARKER ((__force zap_flags_t) BIT(0)) > > +/* Set in unmap_vmas() to indicate an unmap call. Only used by hugetlb */ > +#define ZAP_FLAG_UNMAP ((__force zap_flags_t) BIT(1)) > + > #endif /* _LINUX_MM_H */ > diff --git a/mm/hugetlb.c b/mm/hugetlb.c > index 4a0289ef09fa..0309a7c0f3bc 100644 > --- a/mm/hugetlb.c > +++ b/mm/hugetlb.c > @@ -5062,7 +5062,8 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma, > > static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma, > unsigned long start, unsigned long end, > - struct page *ref_page, zap_flags_t zap_flags) > + struct page *ref_page, zap_flags_t zap_flags, > + bool notification_needed) > { > struct mm_struct *mm = vma->vm_mm; > unsigned long address; > @@ -5087,13 +5088,16 @@ static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct > tlb_change_page_size(tlb, sz); > tlb_start_vma(tlb, vma); > > - /* > - * If sharing possible, alert mmu notifiers of worst case. > - */ > - mmu_notifier_range_init(&range, MMU_NOTIFY_UNMAP, 0, vma, mm, start, > - end); > - adjust_range_if_pmd_sharing_possible(vma, &range.start, &range.end); > - mmu_notifier_invalidate_range_start(&range); > + if (notification_needed) { I'm not 100% sure whether this is needed. Can we move the notification just outside of this function, to where it's needed? Based on the latest mm-unstable c59145c0aa2c, what I read is that it's only needed for unmap_hugepage_range() not __unmap_hugepage_range_locking() (these are the only two callers of __unmap_hugepage_range). Then maybe we can move these notifications into unmap_hugepage_range(). Also note that I _think_ when moving we should change UNMAP to CLEAR notifies too, but worth double check. 
> + /* > + * If sharing possible, alert mmu notifiers of worst case. > + */ > + mmu_notifier_range_init(&range, MMU_NOTIFY_UNMAP, 0, vma, mm, > + start, end); > + adjust_range_if_pmd_sharing_possible(vma, &range.start, > + &range.end); > + mmu_notifier_invalidate_range_start(&range); > + } > last_addr_mask = hugetlb_mask_last_page(h); > address = start; > for (; address < end; address += sz) { > @@ -5178,7 +5182,8 @@ static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct > if (ref_page) > break; > } > - mmu_notifier_invalidate_range_end(&range); > + if (notification_needed) > + mmu_notifier_invalidate_range_end(&range); > tlb_end_vma(tlb, vma); > > /* > @@ -5198,29 +5203,72 @@ static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct > tlb_flush_mmu_tlbonly(tlb); > } > > -void __unmap_hugepage_range_final(struct mmu_gather *tlb, > +static void __unmap_hugepage_range_locking(struct mmu_gather *tlb, > struct vm_area_struct *vma, unsigned long start, > unsigned long end, struct page *ref_page, > zap_flags_t zap_flags) > { > + bool final = zap_flags & ZAP_FLAG_UNMAP; > + > hugetlb_vma_lock_write(vma); > i_mmap_lock_write(vma->vm_file->f_mapping); > > - __unmap_hugepage_range(tlb, vma, start, end, ref_page, zap_flags); > + __unmap_hugepage_range(tlb, vma, start, end, ref_page, zap_flags, > + false); > > - /* > - * Unlock and free the vma lock before releasing i_mmap_rwsem. When > - * the vma_lock is freed, this makes the vma ineligible for pmd > - * sharing. And, i_mmap_rwsem is required to set up pmd sharing. > - * This is important as page tables for this unmapped range will > - * be asynchrously deleted. If the page tables are shared, there > - * will be issues when accessed by someone else. > - */ > - __hugetlb_vma_unlock_write_free(vma); > + if (final) { > + /* > + * Unlock and free the vma lock before releasing i_mmap_rwsem. > + * When the vma_lock is freed, this makes the vma ineligible > + * for pmd sharing. And, i_mmap_rwsem is required to set up > + * pmd sharing. This is important as page tables for this > + * unmapped range will be asynchrously deleted. If the page > + * tables are shared, there will be issues when accessed by > + * someone else. > + */ > + __hugetlb_vma_unlock_write_free(vma); > + i_mmap_unlock_write(vma->vm_file->f_mapping); > + } else { > + i_mmap_unlock_write(vma->vm_file->f_mapping); > + hugetlb_vma_unlock_write(vma); > + } > +} > > - i_mmap_unlock_write(vma->vm_file->f_mapping); > +void __unmap_hugepage_range_final(struct mmu_gather *tlb, > + struct vm_area_struct *vma, unsigned long start, > + unsigned long end, struct page *ref_page, > + zap_flags_t zap_flags) > +{ > + __unmap_hugepage_range_locking(tlb, vma, start, end, ref_page, > + zap_flags); > } > > +#ifdef CONFIG_ADVISE_SYSCALLS > +/* > + * Similar setup as in zap_page_range(). madvise(MADV_DONTNEED) can not call > + * zap_page_range for hugetlb vmas as __unmap_hugepage_range_final will delete > + * the associated vma_lock. 
> + */ > +void clear_hugetlb_page_range(struct vm_area_struct *vma, unsigned long start, > + unsigned long end) > +{ > + struct mmu_notifier_range range; > + struct mmu_gather tlb; > + > + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm, > + start, end); > + adjust_range_if_pmd_sharing_possible(vma, &range.start, &range.end); > + tlb_gather_mmu(&tlb, vma->vm_mm); > + update_hiwater_rss(vma->vm_mm); > + mmu_notifier_invalidate_range_start(&range); > + > + __unmap_hugepage_range_locking(&tlb, vma, start, end, NULL, 0); > + > + mmu_notifier_invalidate_range_end(&range); > + tlb_finish_mmu(&tlb); > +} > +#endif > + > void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start, > unsigned long end, struct page *ref_page, > zap_flags_t zap_flags) > @@ -5228,7 +5276,8 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start, > struct mmu_gather tlb; > > tlb_gather_mmu(&tlb, vma->vm_mm); > - __unmap_hugepage_range(&tlb, vma, start, end, ref_page, zap_flags); > + __unmap_hugepage_range(&tlb, vma, start, end, ref_page, zap_flags, > + true); > tlb_finish_mmu(&tlb); > } > > diff --git a/mm/madvise.c b/mm/madvise.c > index c7105ec6d08c..d8b4d7e56939 100644 > --- a/mm/madvise.c > +++ b/mm/madvise.c > @@ -790,7 +790,10 @@ static int madvise_free_single_vma(struct vm_area_struct *vma, > static long madvise_dontneed_single_vma(struct vm_area_struct *vma, > unsigned long start, unsigned long end) > { > - zap_page_range(vma, start, end - start); > + if (!is_vm_hugetlb_page(vma)) > + zap_page_range(vma, start, end - start); > + else > + clear_hugetlb_page_range(vma, start, end); With the new ZAP_FLAG_UNMAP flag, clear_hugetlb_page_range() can be dropped completely? As zap_page_range() won't be with ZAP_FLAG_UNMAP so we can identify things? IIUC that's the major reason why I thought the zap flag could be helpful.. Thanks! > return 0; > } > > diff --git a/mm/memory.c b/mm/memory.c > index c5599a9279b1..679b702af4ce 100644 > --- a/mm/memory.c > +++ b/mm/memory.c > @@ -1671,7 +1671,7 @@ void unmap_vmas(struct mmu_gather *tlb, struct maple_tree *mt, > { > struct mmu_notifier_range range; > struct zap_details details = { > - .zap_flags = ZAP_FLAG_DROP_MARKER, > + .zap_flags = ZAP_FLAG_DROP_MARKER | ZAP_FLAG_UNMAP, > /* Careful - we need to zap private pages too! */ > .even_cows = true, > }; > -- > 2.37.3 >
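For reference, the direction Peter suggests above (do the notification only in
unmap_hugepage_range(), and with MMU_NOTIFY_CLEAR rather than UNMAP) might look
roughly like the untested sketch below.  It assumes the same hugetlb interfaces
used in the v5 patch and drops the notification_needed argument entirely:

void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
			  unsigned long end, struct page *ref_page,
			  zap_flags_t zap_flags)
{
	struct mmu_notifier_range range;
	struct mmu_gather tlb;

	/*
	 * Only this caller needs to notify; the locking variant is reached
	 * from paths (unmap_vmas, madvise) that have already issued a
	 * notification.  If sharing is possible, alert mmu notifiers of
	 * the worst case.
	 */
	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
				start, end);
	adjust_range_if_pmd_sharing_possible(vma, &range.start, &range.end);

	mmu_notifier_invalidate_range_start(&range);
	tlb_gather_mmu(&tlb, vma->vm_mm);

	__unmap_hugepage_range(&tlb, vma, start, end, ref_page, zap_flags);

	mmu_notifier_invalidate_range_end(&range);
	tlb_finish_mmu(&tlb);
}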
On 10/28/22 12:13, Peter Xu wrote: > On Fri, Oct 28, 2022 at 08:23:25AM -0700, Mike Kravetz wrote: > > On 10/26/22 21:12, Peter Xu wrote: > > > On Wed, Oct 26, 2022 at 04:54:01PM -0700, Mike Kravetz wrote: > > > > On 10/26/22 17:42, Peter Xu wrote: > > diff --git a/mm/madvise.c b/mm/madvise.c > > index c7105ec6d08c..d8b4d7e56939 100644 > > --- a/mm/madvise.c > > +++ b/mm/madvise.c > > @@ -790,7 +790,10 @@ static int madvise_free_single_vma(struct vm_area_struct *vma, > > static long madvise_dontneed_single_vma(struct vm_area_struct *vma, > > unsigned long start, unsigned long end) > > { > > - zap_page_range(vma, start, end - start); > > + if (!is_vm_hugetlb_page(vma)) > > + zap_page_range(vma, start, end - start); > > + else > > + clear_hugetlb_page_range(vma, start, end); > > With the new ZAP_FLAG_UNMAP flag, clear_hugetlb_page_range() can be dropped > completely? As zap_page_range() won't be with ZAP_FLAG_UNMAP so we can > identify things? > > IIUC that's the major reason why I thought the zap flag could be helpful.. Argh. I went to drop clear_hugetlb_page_range() but there is one issue. In zap_page_range() the MMU_NOTIFY_CLEAR notifier is certainly called. However, we really need to have a 'adjust_range_if_pmd_sharing_possible' call in there because the 'range' may be part of a shared pmd. :( I think we need to either have a separate routine like clear_hugetlb_page_range that sets up the appropriate range, or special case hugetlb in zap_page_range. What do you think? I think clear_hugetlb_page_range is the least bad of the two options.
On Fri, Oct 28, 2022 at 02:17:01PM -0700, Mike Kravetz wrote: > On 10/28/22 12:13, Peter Xu wrote: > > On Fri, Oct 28, 2022 at 08:23:25AM -0700, Mike Kravetz wrote: > > > On 10/26/22 21:12, Peter Xu wrote: > > > > On Wed, Oct 26, 2022 at 04:54:01PM -0700, Mike Kravetz wrote: > > > > > On 10/26/22 17:42, Peter Xu wrote: > > > diff --git a/mm/madvise.c b/mm/madvise.c > > > index c7105ec6d08c..d8b4d7e56939 100644 > > > --- a/mm/madvise.c > > > +++ b/mm/madvise.c > > > @@ -790,7 +790,10 @@ static int madvise_free_single_vma(struct vm_area_struct *vma, > > > static long madvise_dontneed_single_vma(struct vm_area_struct *vma, > > > unsigned long start, unsigned long end) > > > { > > > - zap_page_range(vma, start, end - start); > > > + if (!is_vm_hugetlb_page(vma)) > > > + zap_page_range(vma, start, end - start); > > > + else > > > + clear_hugetlb_page_range(vma, start, end); > > > > With the new ZAP_FLAG_UNMAP flag, clear_hugetlb_page_range() can be dropped > > completely? As zap_page_range() won't be with ZAP_FLAG_UNMAP so we can > > identify things? > > > > IIUC that's the major reason why I thought the zap flag could be helpful.. > > Argh. I went to drop clear_hugetlb_page_range() but there is one issue. > In zap_page_range() the MMU_NOTIFY_CLEAR notifier is certainly called. > However, we really need to have a 'adjust_range_if_pmd_sharing_possible' > call in there because the 'range' may be part of a shared pmd. :( > > I think we need to either have a separate routine like clear_hugetlb_page_range > that sets up the appropriate range, or special case hugetlb in zap_page_range. > What do you think? > I think clear_hugetlb_page_range is the least bad of the two options. How about special case hugetlb as you mentioned? If I'm not wrong, it should be 3 lines change: ---8<--- diff --git a/mm/memory.c b/mm/memory.c index c5599a9279b1..0a1632e44571 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -1706,11 +1706,13 @@ void zap_page_range(struct vm_area_struct *vma, unsigned long start, lru_add_drain(); mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm, start, start + size); + if (is_vm_hugetlb_page(vma)) + adjust_range_if_pmd_sharing_possible(vma, &range.start, &range.end); tlb_gather_mmu(&tlb, vma->vm_mm); update_hiwater_rss(vma->vm_mm); mmu_notifier_invalidate_range_start(&range); do { - unmap_single_vma(&tlb, vma, start, range.end, NULL); + unmap_single_vma(&tlb, vma, start, start + size, NULL); } while ((vma = mas_find(&mas, end - 1)) != NULL); mmu_notifier_invalidate_range_end(&range); tlb_finish_mmu(&tlb); ---8<--- As zap_page_range() is already vma-oriented anyway. But maybe I missed something important?
On 10/28/22 19:20, Peter Xu wrote: > On Fri, Oct 28, 2022 at 02:17:01PM -0700, Mike Kravetz wrote: > > On 10/28/22 12:13, Peter Xu wrote: > > > On Fri, Oct 28, 2022 at 08:23:25AM -0700, Mike Kravetz wrote: > > > > On 10/26/22 21:12, Peter Xu wrote: > > > > > On Wed, Oct 26, 2022 at 04:54:01PM -0700, Mike Kravetz wrote: > > > > > > On 10/26/22 17:42, Peter Xu wrote: > > > > diff --git a/mm/madvise.c b/mm/madvise.c > > > > index c7105ec6d08c..d8b4d7e56939 100644 > > > > --- a/mm/madvise.c > > > > +++ b/mm/madvise.c > > > > @@ -790,7 +790,10 @@ static int madvise_free_single_vma(struct vm_area_struct *vma, > > > > static long madvise_dontneed_single_vma(struct vm_area_struct *vma, > > > > unsigned long start, unsigned long end) > > > > { > > > > - zap_page_range(vma, start, end - start); > > > > + if (!is_vm_hugetlb_page(vma)) > > > > + zap_page_range(vma, start, end - start); > > > > + else > > > > + clear_hugetlb_page_range(vma, start, end); > > > > > > With the new ZAP_FLAG_UNMAP flag, clear_hugetlb_page_range() can be dropped > > > completely? As zap_page_range() won't be with ZAP_FLAG_UNMAP so we can > > > identify things? > > > > > > IIUC that's the major reason why I thought the zap flag could be helpful.. > > > > Argh. I went to drop clear_hugetlb_page_range() but there is one issue. > > In zap_page_range() the MMU_NOTIFY_CLEAR notifier is certainly called. > > However, we really need to have a 'adjust_range_if_pmd_sharing_possible' > > call in there because the 'range' may be part of a shared pmd. :( > > > > I think we need to either have a separate routine like clear_hugetlb_page_range > > that sets up the appropriate range, or special case hugetlb in zap_page_range. > > What do you think? > > I think clear_hugetlb_page_range is the least bad of the two options. > > How about special case hugetlb as you mentioned? If I'm not wrong, it > should be 3 lines change: > > ---8<--- > diff --git a/mm/memory.c b/mm/memory.c > index c5599a9279b1..0a1632e44571 100644 > --- a/mm/memory.c > +++ b/mm/memory.c > @@ -1706,11 +1706,13 @@ void zap_page_range(struct vm_area_struct *vma, unsigned long start, > lru_add_drain(); > mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm, > start, start + size); > + if (is_vm_hugetlb_page(vma)) > + adjust_range_if_pmd_sharing_possible(vma, &range.start, &range.end); > tlb_gather_mmu(&tlb, vma->vm_mm); > update_hiwater_rss(vma->vm_mm); > mmu_notifier_invalidate_range_start(&range); > do { > - unmap_single_vma(&tlb, vma, start, range.end, NULL); > + unmap_single_vma(&tlb, vma, start, start + size, NULL); > } while ((vma = mas_find(&mas, end - 1)) != NULL); > mmu_notifier_invalidate_range_end(&range); > tlb_finish_mmu(&tlb); > ---8<--- > > As zap_page_range() is already vma-oriented anyway. But maybe I missed > something important? zap_page_range is a bit confusing. It appears that the passed range can span multiple vmas. Otherwise, there would be no do while loop. Yet, there is only one mmu_notifier_range_init call specifying the passed vma. It appears all callers pass a range entirely within a single vma. The modifications above would work for a range within a single vma. However, things would be more complicated if the range can indeed span multiple vmas. For multiple vmas, we would need to check the first and last vmas for pmd sharing. Anyone know more about this seeming confusing behavior? Perhaps, range spanning multiple vmas was left over earlier code?
On Oct 29, 2022, at 5:15 PM, Mike Kravetz <mike.kravetz@oracle.com> wrote: > zap_page_range is a bit confusing. It appears that the passed range can > span multiple vmas. Otherwise, there would be no do while loop. Yet, there > is only one mmu_notifier_range_init call specifying the passed vma. > > It appears all callers pass a range entirely within a single vma. > > The modifications above would work for a range within a single vma. However, > things would be more complicated if the range can indeed span multiple vmas. > For multiple vmas, we would need to check the first and last vmas for > pmd sharing. > > Anyone know more about this seeming confusing behavior? Perhaps, range > spanning multiple vmas was left over earlier code? I don’t have personal knowledge, but I noticed that it does not make much sense, at least for MADV_DONTNEED. I tried to batch the TLB flushes across VMAs for madvise’s. [1] Need to get to it sometime. [1] https://lore.kernel.org/lkml/20210926161259.238054-7-namit@vmware.com/
On Sat, Oct 29, 2022 at 05:54:44PM -0700, Nadav Amit wrote: > On Oct 29, 2022, at 5:15 PM, Mike Kravetz <mike.kravetz@oracle.com> wrote: > > > zap_page_range is a bit confusing. It appears that the passed range can > > span multiple vmas. Otherwise, there would be no do while loop. Yet, there > > is only one mmu_notifier_range_init call specifying the passed vma. > > > > It appears all callers pass a range entirely within a single vma. > > > > The modifications above would work for a range within a single vma. However, > > things would be more complicated if the range can indeed span multiple vmas. > > For multiple vmas, we would need to check the first and last vmas for > > pmd sharing. > > > > Anyone know more about this seeming confusing behavior? Perhaps, range > > spanning multiple vmas was left over earlier code? > > I don’t have personal knowledge, but I noticed that it does not make much > sense, at least for MADV_DONTNEED. I tried to batch the TLB flushes across > VMAs for madvise’s. [1] The loop comes from 7e027b14d53e ("vm: simplify unmap_vmas() calling convention", 2012-05-06), where zap_page_range() was used to replace a call to unmap_vmas() because the patch wanted to eliminate the zap details pointer for unmap_vmas(), which makes sense. I didn't check the old code, but from what I can tell (and also as Mike pointed out) I don't think zap_page_range() in the lastest code base is ever used on multi-vma at all. Otherwise the mmu notifier is already broken - see mmu_notifier_range_init() where the vma pointer is also part of the notification. Perhaps we should just remove the loop? > > Need to get to it sometime. > > [1] https://lore.kernel.org/lkml/20210926161259.238054-7-namit@vmware.com/ >
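If the loop really is dead, zap_page_range() would presumably collapse to a
single-vma version along these lines (sketch only, not compiled against any
tree):

void zap_page_range(struct vm_area_struct *vma, unsigned long start,
		    unsigned long size)
{
	struct mmu_notifier_range range;
	struct mmu_gather tlb;

	lru_add_drain();
	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
				start, start + size);
	tlb_gather_mmu(&tlb, vma->vm_mm);
	update_hiwater_rss(vma->vm_mm);
	mmu_notifier_invalidate_range_start(&range);
	/* No mas_find() walk: the range is assumed to lie within a single vma. */
	unmap_single_vma(&tlb, vma, start, range.end, NULL);
	mmu_notifier_invalidate_range_end(&range);
	tlb_finish_mmu(&tlb);
}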
On Oct 30, 2022, at 11:43 AM, Peter Xu <peterx@redhat.com> wrote: > The loop comes from 7e027b14d53e ("vm: simplify unmap_vmas() calling > convention", 2012-05-06), where zap_page_range() was used to replace a call > to unmap_vmas() because the patch wanted to eliminate the zap details > pointer for unmap_vmas(), which makes sense. > > I didn't check the old code, but from what I can tell (and also as Mike > pointed out) I don't think zap_page_range() in the lastest code base is > ever used on multi-vma at all. Otherwise the mmu notifier is already > broken - see mmu_notifier_range_init() where the vma pointer is also part > of the notification. > > Perhaps we should just remove the loop? There is already zap_page_range_single() that does exactly that. Just need to export it.
On 10/30/22 11:52, Nadav Amit wrote: > On Oct 30, 2022, at 11:43 AM, Peter Xu <peterx@redhat.com> wrote: > > > The loop comes from 7e027b14d53e ("vm: simplify unmap_vmas() calling > > convention", 2012-05-06), where zap_page_range() was used to replace a call > > to unmap_vmas() because the patch wanted to eliminate the zap details > > pointer for unmap_vmas(), which makes sense. > > > > I didn't check the old code, but from what I can tell (and also as Mike > > pointed out) I don't think zap_page_range() in the lastest code base is > > ever used on multi-vma at all. Otherwise the mmu notifier is already > > broken - see mmu_notifier_range_init() where the vma pointer is also part > > of the notification. > > > > Perhaps we should just remove the loop? > > There is already zap_page_range_single() that does exactly that. Just need > to export it. I was thinking that zap_page_range() should perform a notification call for each vma within the loop. Something like this? @@ -1704,15 +1704,21 @@ void zap_page_range(struct vm_area_struct *vma, unsigned long start, MA_STATE(mas, mt, vma->vm_end, vma->vm_end); lru_add_drain(); - mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm, - start, start + size); tlb_gather_mmu(&tlb, vma->vm_mm); update_hiwater_rss(vma->vm_mm); - mmu_notifier_invalidate_range_start(&range); do { - unmap_single_vma(&tlb, vma, start, range.end, NULL); + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, + vma->vm_mm, + max(start, vma->vm_start), + min(start + size, vma->vm_end)); + if (is_vm_hugetlb_page(vma)) + adjust_range_if_pmd_sharing_possible(vma, + &range.start, + &range.end); + mmu_notifier_invalidate_range_start(&range); + unmap_single_vma(&tlb, vma, start, start + size, NULL); + mmu_notifier_invalidate_range_end(&range); } while ((vma = mas_find(&mas, end - 1)) != NULL); - mmu_notifier_invalidate_range_end(&range); tlb_finish_mmu(&tlb); } One thing to keep in mind is that this patch is a fix that must be backported to stable. Therefore, I do not think we want to add too many changes out of the direct scope of the fix. We can always change things like this in follow up patches.
On Sun, Oct 30, 2022 at 06:44:10PM -0700, Mike Kravetz wrote: > On 10/30/22 11:52, Nadav Amit wrote: > > On Oct 30, 2022, at 11:43 AM, Peter Xu <peterx@redhat.com> wrote: > > > > > The loop comes from 7e027b14d53e ("vm: simplify unmap_vmas() calling > > > convention", 2012-05-06), where zap_page_range() was used to replace a call > > > to unmap_vmas() because the patch wanted to eliminate the zap details > > > pointer for unmap_vmas(), which makes sense. > > > > > > I didn't check the old code, but from what I can tell (and also as Mike > > > pointed out) I don't think zap_page_range() in the lastest code base is > > > ever used on multi-vma at all. Otherwise the mmu notifier is already > > > broken - see mmu_notifier_range_init() where the vma pointer is also part > > > of the notification. > > > > > > Perhaps we should just remove the loop? > > > > There is already zap_page_range_single() that does exactly that. Just need > > to export it. > > I was thinking that zap_page_range() should perform a notification call for > each vma within the loop. Something like this? I'm boldly guessing what Nadav suggested was using zap_page_range_single() and export it for MADV_DONTNEED. Hopefully that's also the easiest for stable? For the long term, I really think we should just get rid of the loop.. > > @@ -1704,15 +1704,21 @@ void zap_page_range(struct vm_area_struct *vma, unsigned long start, > MA_STATE(mas, mt, vma->vm_end, vma->vm_end); > > lru_add_drain(); > - mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm, > - start, start + size); > tlb_gather_mmu(&tlb, vma->vm_mm); > update_hiwater_rss(vma->vm_mm); > - mmu_notifier_invalidate_range_start(&range); > do { > - unmap_single_vma(&tlb, vma, start, range.end, NULL); > + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, > + vma->vm_mm, > + max(start, vma->vm_start), > + min(start + size, vma->vm_end)); > + if (is_vm_hugetlb_page(vma)) > + adjust_range_if_pmd_sharing_possible(vma, > + &range.start, > + &range.end); > + mmu_notifier_invalidate_range_start(&range); > + unmap_single_vma(&tlb, vma, start, start + size, NULL); > + mmu_notifier_invalidate_range_end(&range); > } while ((vma = mas_find(&mas, end - 1)) != NULL); > - mmu_notifier_invalidate_range_end(&range); > tlb_finish_mmu(&tlb); > } > > > One thing to keep in mind is that this patch is a fix that must be > backported to stable. Therefore, I do not think we want to add too > many changes out of the direct scope of the fix. > > We can always change things like this in follow up patches. > -- > Mike Kravetz >
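For illustration, assuming zap_page_range_single() were made non-static (its
struct zap_details argument can simply be NULL here) and combined with the
ZAP_FLAG_UNMAP distinction above, so that __unmap_hugepage_range_final() only
frees the vma_lock on a real unmap, the madvise side might shrink to a
one-liner with no hugetlb special casing at all:

static long madvise_dontneed_single_vma(struct vm_area_struct *vma,
					unsigned long start, unsigned long end)
{
	/*
	 * zap_page_range_single() handles exactly one vma, which is all
	 * MADV_DONTNEED operates on here; hugetlb vmas are recognized
	 * further down in unmap_single_vma().  NULL details means zap_flags
	 * is 0, i.e. not a 'final' unmap.
	 */
	zap_page_range_single(vma, start, end - start, NULL);
	return 0;
}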
On 11/02/22 15:24, Peter Xu wrote: > On Sun, Oct 30, 2022 at 06:44:10PM -0700, Mike Kravetz wrote: > > On 10/30/22 11:52, Nadav Amit wrote: > > > On Oct 30, 2022, at 11:43 AM, Peter Xu <peterx@redhat.com> wrote: > > > > > > > The loop comes from 7e027b14d53e ("vm: simplify unmap_vmas() calling > > > > convention", 2012-05-06), where zap_page_range() was used to replace a call > > > > to unmap_vmas() because the patch wanted to eliminate the zap details > > > > pointer for unmap_vmas(), which makes sense. > > > > > > > > I didn't check the old code, but from what I can tell (and also as Mike > > > > pointed out) I don't think zap_page_range() in the lastest code base is > > > > ever used on multi-vma at all. Otherwise the mmu notifier is already > > > > broken - see mmu_notifier_range_init() where the vma pointer is also part > > > > of the notification. > > > > > > > > Perhaps we should just remove the loop? > > > > > > There is already zap_page_range_single() that does exactly that. Just need > > > to export it. > > > > I was thinking that zap_page_range() should perform a notification call for > > each vma within the loop. Something like this? > > I'm boldly guessing what Nadav suggested was using zap_page_range_single() > and export it for MADV_DONTNEED. Hopefully that's also the easiest for > stable? I started making this change, then noticed that zap_vma_ptes() just calls zap_page_range_single(). And, it is already exported. That may be a better fit since exporting zap_page_range_single would require a wrapper as I do not think we want to export struct zap_details as well. In any case, we still need to add the adjust_range_if_pmd_sharing_possible() call to zap_page_range_single. > > For the long term, I really think we should just get rid of the loop.. > Yes. It will look a little strange if adjust_range_if_pmd_sharing_possible is added to zap_page_range_single but not zap_page_range. And, to properly add it to zap_page_range means rewriting the routine as I did here: https://lore.kernel.org/linux-mm/20221102013100.455139-1-mike.kravetz@oracle.com/
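A sketch of what that could look like inside zap_page_range_single() itself,
mirroring Peter's earlier three-line change to zap_page_range() (untested, and
keeping the existing structure of the routine):

static void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
		unsigned long size, struct zap_details *details)
{
	struct mmu_notifier_range range;
	struct mmu_gather tlb;

	lru_add_drain();
	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
				address, address + size);
	if (is_vm_hugetlb_page(vma))
		/* The range may be covered by a shared pmd: widen the notify range. */
		adjust_range_if_pmd_sharing_possible(vma, &range.start,
						     &range.end);
	tlb_gather_mmu(&tlb, vma->vm_mm);
	update_hiwater_rss(vma->vm_mm);
	mmu_notifier_invalidate_range_start(&range);
	unmap_single_vma(&tlb, vma, address, range.end, details);
	mmu_notifier_invalidate_range_end(&range);
	tlb_finish_mmu(&tlb);
}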
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index a899bc76d677..0246e77be3a3 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -158,6 +158,8 @@ long follow_hugetlb_page(struct mm_struct *, struct vm_area_struct *,
 void unmap_hugepage_range(struct vm_area_struct *,
 			  unsigned long, unsigned long, struct page *,
 			  zap_flags_t);
+void clear_hugetlb_page_range(struct vm_area_struct *vma,
+			  unsigned long start, unsigned long end);
 void __unmap_hugepage_range_final(struct mmu_gather *tlb,
 			  struct vm_area_struct *vma,
 			  unsigned long start, unsigned long end,
@@ -426,6 +428,11 @@ static inline void __unmap_hugepage_range_final(struct mmu_gather *tlb,
 	BUG();
 }
 
+static void __maybe_unused clear_hugetlb_page_range(struct vm_area_struct *vma,
+			unsigned long start, unsigned long end)
+{
+}
+
 static inline vm_fault_t hugetlb_fault(struct mm_struct *mm,
 			struct vm_area_struct *vma, unsigned long address,
 			unsigned int flags)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 931789a8f734..807cfd2884fa 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5202,29 +5202,67 @@ static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct
 	tlb_flush_mmu_tlbonly(tlb);
 }
 
-void __unmap_hugepage_range_final(struct mmu_gather *tlb,
+static void __unmap_hugepage_range_locking(struct mmu_gather *tlb,
 			  struct vm_area_struct *vma, unsigned long start,
 			  unsigned long end, struct page *ref_page,
-			  zap_flags_t zap_flags)
+			  zap_flags_t zap_flags, bool final)
 {
 	hugetlb_vma_lock_write(vma);
 	i_mmap_lock_write(vma->vm_file->f_mapping);
 
 	__unmap_hugepage_range(tlb, vma, start, end, ref_page, zap_flags);
 
-	/*
-	 * Unlock and free the vma lock before releasing i_mmap_rwsem.  When
-	 * the vma_lock is freed, this makes the vma ineligible for pmd
-	 * sharing.  And, i_mmap_rwsem is required to set up pmd sharing.
-	 * This is important as page tables for this unmapped range will
-	 * be asynchrously deleted.  If the page tables are shared, there
-	 * will be issues when accessed by someone else.
-	 */
-	__hugetlb_vma_unlock_write_free(vma);
+	if (final) {
+		/*
+		 * Unlock and free the vma lock before releasing i_mmap_rwsem.
+		 * When the vma_lock is freed, this makes the vma ineligible
+		 * for pmd sharing.  And, i_mmap_rwsem is required to set up
+		 * pmd sharing.  This is important as page tables for this
+		 * unmapped range will be asynchrously deleted.  If the page
+		 * tables are shared, there will be issues when accessed by
+		 * someone else.
+		 */
+		__hugetlb_vma_unlock_write_free(vma);
+		i_mmap_unlock_write(vma->vm_file->f_mapping);
+	} else {
+		i_mmap_unlock_write(vma->vm_file->f_mapping);
+		hugetlb_vma_unlock_write(vma);
+	}
+}
 
-	i_mmap_unlock_write(vma->vm_file->f_mapping);
+void __unmap_hugepage_range_final(struct mmu_gather *tlb,
+			  struct vm_area_struct *vma, unsigned long start,
+			  unsigned long end, struct page *ref_page,
+			  zap_flags_t zap_flags)
+{
+	__unmap_hugepage_range_locking(tlb, vma, start, end, ref_page,
+				       zap_flags, true);
 }
 
+#ifdef CONFIG_ADVISE_SYSCALLS
+/*
+ * Similar setup as in zap_page_range().  madvise(MADV_DONTNEED) can not call
+ * zap_page_range for hugetlb vmas as __unmap_hugepage_range_final will delete
+ * the associated vma_lock.
+ */
+void clear_hugetlb_page_range(struct vm_area_struct *vma, unsigned long start,
+			      unsigned long end)
+{
+	struct mmu_notifier_range range;
+	struct mmu_gather tlb;
+
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
+				start, end);
+	tlb_gather_mmu(&tlb, vma->vm_mm);
+	update_hiwater_rss(vma->vm_mm);
+
+	__unmap_hugepage_range_locking(&tlb, vma, start, end, NULL, 0, false);
+
+	mmu_notifier_invalidate_range_end(&range);
+	tlb_finish_mmu(&tlb);
+}
+#endif
+
 void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
 			  unsigned long end, struct page *ref_page,
 			  zap_flags_t zap_flags)
diff --git a/mm/madvise.c b/mm/madvise.c
index 2baa93ca2310..90577a669635 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -790,7 +790,10 @@ static int madvise_free_single_vma(struct vm_area_struct *vma,
 static long madvise_dontneed_single_vma(struct vm_area_struct *vma,
					unsigned long start, unsigned long end)
 {
-	zap_page_range(vma, start, end - start);
+	if (!is_vm_hugetlb_page(vma))
+		zap_page_range(vma, start, end - start);
+	else
+		clear_hugetlb_page_range(vma, start, end);
 	return 0;
 }
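Note that the clear_hugetlb_page_range() in the diff above is the version the
thread comments on: it initializes the notifier range but never calls
mmu_notifier_invalidate_range_start(), and it does not widen the range for
possible pmd sharing.  The v5 posting earlier in the thread fills both gaps,
roughly:

	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
				start, end);
	adjust_range_if_pmd_sharing_possible(vma, &range.start, &range.end);
	tlb_gather_mmu(&tlb, vma->vm_mm);
	update_hiwater_rss(vma->vm_mm);
	mmu_notifier_invalidate_range_start(&range);	/* missing in the hunk above */

	__unmap_hugepage_range_locking(&tlb, vma, start, end, NULL, 0);

	mmu_notifier_invalidate_range_end(&range);
	tlb_finish_mmu(&tlb);

(The v5 __unmap_hugepage_range_locking() derives the 'final' case from
ZAP_FLAG_UNMAP instead of taking a bool, hence the shorter call.)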